Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread Vladimir Nesov
On Feb 20, 2008 6:13 AM, Stathis Papaioannou [EMAIL PROTECTED] wrote: The possibility of mind uploading to computers strictly depends on functionalism being true; if it isn't then you may as well shoot yourself in the head as undergo a destructive upload. Functionalism (invented, and later

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread Stan Nilsen
Vladimir Nesov wrote: On Feb 20, 2008 6:13 AM, Stathis Papaioannou [EMAIL PROTECTED] wrote: The possibility of mind uploading to computers strictly depends on functionalism being true; if it isn't then you may as well shoot yourself in the head as undergo a destructive upload. Functionalism

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread Richard Loosemore
Stathis Papaioannou wrote: On 20/02/2008, Richard Loosemore [EMAIL PROTECTED] wrote: I am aware of some of those other sources for the idea: nevertheless, they are all nonsense for the same reason. I especially single out Searle: his writings on this subject are virtually worthless. I have

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread gifting
Quoting Vladimir Nesov [EMAIL PROTECTED]: On Feb 20, 2008 6:13 AM, Stathis Papaioannou [EMAIL PROTECTED] wrote: The possibility of mind uploading to computers strictly depends on functionalism being true; if it isn't then you may as well shoot yourself in the head as undergo a destructive

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread John Ku
On 2/20/08, Stan Nilsen [EMAIL PROTECTED] wrote: It seems that when philosophy is implemented it becomes like nuclear physics, e.g. breaking down all the things we essentially understand until we come up with pieces, which we give names to, and then admit we don't know what the names identify -

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread Richard Loosemore
John Ku wrote: By the way, I think this whole tangent was actually started by Richard misinterpreting Lanier's argument (though quite understandably given Lanier's vagueness and unclarity). Lanier was not imagining the amazing coincidence of a genuine computer being implemented in a rainstorm,

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread Stathis Papaioannou
On 21/02/2008, John Ku [EMAIL PROTECTED] wrote: By the way, I think this whole tangent was actually started by Richard misinterpreting Lanier's argument (though quite understandably given Lanier's vagueness and unclarity). Lanier was not imagining the amazing coincidence of a genuine

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread John Ku
On 2/20/08, Stathis Papaioannou [EMAIL PROTECTED] wrote: On 21/02/2008, John Ku [EMAIL PROTECTED] wrote: By the way, I think this whole tangent was actually started by Richard misinterpreting Lanier's argument (though quite understandably given Lanier's vagueness and unclarity). Lanier

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread Stathis Papaioannou
On 21/02/2008, John Ku [EMAIL PROTECTED] wrote: On 2/20/08, Stathis Papaioannou [EMAIL PROTECTED] wrote: On 21/02/2008, John Ku [EMAIL PROTECTED] wrote: By the way, I think this whole tangent was actually started by Richard misinterpreting Lanier's argument (though quite

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-19 Thread Stathis Papaioannou
On 19/02/2008, John Ku [EMAIL PROTECTED] wrote: Yes, you've shown either that, or that even some occasionally intelligent and competent philosophers sometimes take seriously ideas that really can be dismissed as obviously ridiculous -- ideas which really are unworthy of careful thought were

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-19 Thread Richard Loosemore
Stathis Papaioannou wrote: On 19/02/2008, Richard Loosemore [EMAIL PROTECTED] wrote: Sorry, but I do not think your conclusion even remotely follows from the premises. But beyond that, the basic reason that this line of argument is nonsensical is that Lanier's thought experiment was rigged in

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-19 Thread Eric B. Ramsay
During the late '70s when I was at McGill, I attended a public talk given by Feynman on quantum physics. After the talk, and in answer to a question posed by a member of the audience, Feynman said something along the lines of: I have here in my pocket a prescription from my doctor that

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-18 Thread Richard Loosemore
Stathis Papaioannou wrote: On 18/02/2008, Richard Loosemore [EMAIL PROTECTED] wrote: [snip] But again, none of this touches upon Lanier's attempt to draw a bogus conclusion from his thought experiment. No external observer would ever be able to keep track of such a fragmented computation and

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-18 Thread Stathis Papaioannou
On 19/02/2008, Richard Loosemore [EMAIL PROTECTED] wrote: Sorry, but I do not think your conclusion even remotely follows from the premises. But beyond that, the basic reason that this line of argument is nonsensical is that Lanier's thought experiment was rigged in such a way that a

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-18 Thread John Ku
On 2/18/08, Stathis Papaioannou [EMAIL PROTECTED] wrote: By the way, Lanier's idea is not original. Hilary Putnam, John Searle, Tim Maudlin, Greg Egan, Hans Moravec, David Chalmers (see the paper cited by Kaj Sotola in the original thread - http://consc.net/papers/rock.html) have all

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-17 Thread John Ku
On 2/17/08, Stathis Papaioannou [EMAIL PROTECTED] wrote: If computation is multiply realizable, it could be seen as being implemented by an endless variety of physical systems, with the right mapping or interpretation, since anything at all could be arbitrarily chosen to represent a tape, a
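
The multiple-realizability claim quoted above can be made concrete with a minimal sketch. The toy machine, rule table, and symbol names below are illustrative assumptions, not anything posted in the thread; the point is only that the same abstract computation runs unchanged under any choice of concrete tokens, because only the mapping matters.

# Toy two-symbol machine; `symbols` maps the abstract alphabet {0, 1}
# onto whatever concrete tokens this particular realization uses.
def run_machine(tape, rules, symbols, steps):
    decode = {v: k for k, v in symbols.items()}
    head, state = 0, "A"
    for _ in range(steps):
        if head >= len(tape):
            break
        write, move, state = rules[(state, decode[tape[head]])]
        tape[head] = symbols[write]   # re-encode into this realization
        head += move
    return tape

# One abstract rule: flip the symbol under the head and move right.
rules = {("A", 0): (1, 1, "A"), ("A", 1): (0, 1, "A")}

# Two different "physical" realizations of the same computation:
print(run_machine([0, 1, 0], rules, {0: 0, 1: 1}, 3))        # [1, 0, 1]
print(run_machine(["dry", "wet", "dry"], rules,
                  {0: "dry", 1: "wet"}, 3))                   # ['wet', 'dry', 'wet']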

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-17 Thread Richard Loosemore
Stathis Papaioannou wrote: On 17/02/2008, Richard Loosemore [EMAIL PROTECTED] wrote: The first problem arises from Lanier's trick of claiming that there is a computer, in the universe of all possible computers, that has a machine architecture and a machine state that is isomorphic to BOTH the

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-17 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote: When people like Lanier allow themselves the luxury of positing infinitely large computers (who else do we know who does this? Ah, yes, the AIXI folks), they can make infinitely unlikely coincidences happen. It is a commonly accepted practice

Re: [singularity] AI critique by Jaron Lanier

2008-02-17 Thread Matt Mahoney
--- John Ku [EMAIL PROTECTED] wrote: On 2/16/08, Matt Mahoney [EMAIL PROTECTED] wrote: I would prefer to leave behind these counterfactuals altogether and try to use information theory and control theory to achieve a precise understanding of what it is for something to be the

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-17 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: When people like Lanier allow themselves the luxury of positing infinitely large computers (who else do we know who does this? Ah, yes, the AIXI folks), they can make infinitely unlikely coincidences happen. It is a commonly

Re: [singularity] AI critique by Jaron Lanier

2008-02-17 Thread John Ku
On 2/17/08, Matt Mahoney [EMAIL PROTECTED] wrote: Nevertheless we can make similar reductions to absurdity with respect to qualia, that which distinguishes you from a philosophical zombie. There is no experiment to distinguish whether you actually experience redness when you see a red

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-17 Thread Stathis Papaioannou
On 18/02/2008, Richard Loosemore [EMAIL PROTECTED] wrote: The last statement you make, though, is not quite correct: with a jumbled-up sequence of episodes during which the various machines were running the brain code, the whole would lose its coherence, because input from the world would now

Re: [singularity] AI critique by Jaron Lanier

2008-02-17 Thread Matt Mahoney
--- John Ku [EMAIL PROTECTED] wrote: On 2/17/08, Matt Mahoney [EMAIL PROTECTED] wrote: Nevertheless we can make similar reductions to absurdity with respect to qualia, that which distinguishes you from a philosophical zombie. There is no experiment to distinguish whether you actually

Re: [singularity] AI critique by Jaron Lanier

2008-02-16 Thread Matt Mahoney
--- John Ku [EMAIL PROTECTED] wrote: On 2/15/08, Eric B. Ramsay [EMAIL PROTECTED] wrote: http://www.jaronlanier.com/aichapter.html I take it the target of his rainstorm argument is the idea that the essential features of consciousness are its information-processing properties. I

Re: [singularity] AI critique by Jaron Lanier

2008-02-16 Thread Richard Loosemore
Eric B. Ramsay wrote: I don't know when Lanier wrote the following but I would be interested to know what the AI folks here think about his critique (or direct me to a thread where this was already discussed). Also would someone be able to re-state his rainstorm thought experiment more clearly

Re: [singularity] AI critique by Jaron Lanier

2008-02-16 Thread Stathis Papaioannou
On 17/02/2008, Richard Loosemore [EMAIL PROTECTED] wrote: Lanier's rainstorm argument is spurious nonsense. That's the response of most functionalists, but an explanation as to why it is spurious nonsense is needed. And some, such as Hans Moravec, have actually conceded that the argument is

Re: [singularity] AI critique by Jaron Lanier

2008-02-16 Thread John Ku
On 2/16/08, Matt Mahoney [EMAIL PROTECTED] wrote: I would prefer to leave behind these counterfactuals altogether and try to use information theory and control theory to achieve a precise understanding of what it is for something to be the standard(s) in terms of which we are able to

Re: [singularity] AI critique by Jaron Lanier

2008-02-15 Thread Matt Mahoney
--- Eric B. Ramsay [EMAIL PROTECTED] wrote: I don't know when Lanier wrote the following but I would be interested to know what the AI folks here think about his critique (or direct me to a thread where this was already discussed). Also would someone be able to re-state his rainstorm thought

Re: [singularity] AI critique by Jaron Lanier

2008-02-15 Thread Stathis Papaioannou
On 16/02/2008, Kaj Sotala [EMAIL PROTECTED] wrote: However, despite what is claimed, not every physical process can be interpreted as performing any computation. To do such an interpretation, you have to do so after the fact: after all the raindrops have fallen, you can assign their positions formal
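
The after-the-fact caveat in that last message is the crux, and a small sketch makes it visible. Everything below is a hypothetical illustration (the variable names and the eight-state trace are assumptions, not anything from the thread): once the drops have fallen, a mapping onto any equally long computation trace can always be constructed, but only because the mapping is read off from the target trace itself, so it predicts nothing and supports no counterfactuals.

import random

# Eight recorded "drop positions" standing in for a finished physical process.
raindrops = [(random.random(), random.random()) for _ in range(8)]

# The trace of some computation we want the drops to "implement":
# here, a 3-bit counter stepping from 000 to 111.
trace = [format(n, "03b") for n in range(8)]

# The interpretation exists only after the fact, built FROM the trace:
interpretation = dict(zip(raindrops, trace))

for drop in raindrops:
    print(drop, "->", interpretation[drop])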