Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-22 Thread J Storrs Hall, PhD
On Friday 21 December 2007 09:51:13 pm, Ed Porter wrote: As a lawyer, I can tell you there is no clear, agreed-upon definition for most words, but that doesn't stop most of us from using imprecisely defined words productively many times every day to communicate with others. If you can only

Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-21 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: Matt Mahoney wrote: --- Stan Nilsen [EMAIL PROTECTED] wrote: Matt, Thanks for the links sent earlier. I especially like the paper by Legg and Hutter regarding measurement of machine intelligence. The other paper I find

Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-21 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote: Matt Mahoney wrote: --- Stan Nilsen [EMAIL PROTECTED] wrote: Matt, Thanks for the links sent earlier. I especially like the paper by Legg and Hutter regarding measurement of machine intelligence. The other paper I find difficult,

Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-21 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote: Still more nonsense: as I have pointed out before, Hutter's implied definitions of agent and environment and intelligence are not connected to real world usages of those terms, because he allows all of these things to depend on infinities

Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-21 Thread Vladimir Nesov
On Dec 21, 2007 6:56 PM, Matt Mahoney [EMAIL PROTECTED] wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: Still more nonsense: as I have pointed out before, Hutter's implied definitions of agent and environment and intelligence are not connected to real world usages of those terms,

Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-21 Thread Matt Mahoney
--- Vladimir Nesov [EMAIL PROTECTED] wrote: On Dec 21, 2007 6:56 PM, Matt Mahoney [EMAIL PROTECTED] wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: Still more nonsense: as I have pointed out before, Hutter's implied definitions of agent and environment and intelligence are not

Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-21 Thread Vladimir Nesov
On Dec 21, 2007 10:36 PM, Matt Mahoney [EMAIL PROTECTED] wrote: The problem here seems to be that we can't agree on a useful definition of intelligence. As a practical matter, we are interested in an agent meeting goals in a specific environment, or a finite set of environments, not all
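[A minimal sketch of the "finite set of environments" notion Matt describes here, using hypothetical Environment and Agent interfaces and a made-up practical_intelligence function; none of these names come from the thread or from the Legg-Hutter paper.]

    # Hypothetical sketch: score an agent by its average episodic reward over a
    # fixed, finite suite of environments, rather than over all computable ones.
    from typing import Protocol, Sequence, Tuple

    class Environment(Protocol):
        def reset(self) -> object: ...
        def step(self, action: object) -> Tuple[object, float, bool]:
            """Return (observation, reward, done)."""
            ...

    class Agent(Protocol):
        def act(self, observation: object) -> object: ...

    def practical_intelligence(agent: Agent,
                               environments: Sequence[Environment],
                               max_steps: int = 1000) -> float:
        """Average total reward of `agent` across a finite environment suite."""
        totals = []
        for env in environments:
            obs, total = env.reset(), 0.0
            for _ in range(max_steps):
                obs, reward, done = env.step(agent.act(obs))
                total += reward
                if done:
                    break
            totals.append(total)
        return sum(totals) / len(totals)

The only point of the sketch is that such a score is always relative to whichever suite of environments you choose, which is exactly where this subthread's disagreement lies.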

Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-21 Thread Mike Tintner
Matt: Humans cannot recognize intelligence superior to their own. This, like this whole thread, is not totally but highly unimaginative. No one is throwing out any interesting ideas about what a superior intelligence might entail. Mainly it's the same old mathematical, linear approach.

RE: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-21 Thread Ed Porter
I fail to see why it would not at least be considered likely that a mechanical brain that could do all the major useful mental processes the human mind does, but do them much faster over a much, much larger recorded body of experience and learning, would be capable of greater intelligence

Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-21 Thread aiguy
How about how many useful patents the AGI can lay claim to in a year? We feed in all the world's major problems and ask it for any inventions which would provide cost-effective partial solutions towards solving these problems. Obviously there will be many alternate problems and solution paths

Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-21 Thread Matt Mahoney
--- Vladimir Nesov [EMAIL PROTECTED] wrote: Intelligence is 'what brains do' --- Mike Tintner [EMAIL PROTECTED] wrote: Don't you read any superhero/superpower comics or sci-fi? Obviously there are an infinite number of very recognisable forms which a superhuman intelligence could take. ---

RE: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-21 Thread Ed Porter
to have more of it than we do within a decade or two. Ed Porter -Original Message- From: Matt Mahoney [mailto:[EMAIL PROTECTED] Sent: Friday, December 21, 2007 5:34 PM To: agi@v2.listbox.com --- Vladimir

Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-20 Thread Stan Nilsen
Matt, Thanks for the links sent earlier. I especially like the paper by Legg and Hutter regarding measurement of machine intelligence. The other paper I find difficult; probably it's deeper than I am. A comment on two things: 1) The response Intelligence has nothing to do with

Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-20 Thread Matt Mahoney
--- Stan Nilsen [EMAIL PROTECTED] wrote: Matt, Thanks for the links sent earlier. I especially like the paper by Legg and Hutter regarding measurement of machine intelligence. The other paper I find difficult, probably it's deeper than I am. The AIXI paper is essentially a proof of
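[For readers without the paper at hand, the Legg-Hutter measure this subthread keeps referring to is, as I understand it, the universal intelligence of an agent \pi:

    \Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

where E is the set of computable environments, K(\mu) is the Kolmogorov complexity of environment \mu, and V_\mu^\pi is the expected total reward the agent earns in \mu. The 2^{-K(\mu)} weight is what makes the measure range over all computable environments, favoring simple ones, rather than over any finite suite.]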

Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-20 Thread Richard Loosemore
Matt Mahoney wrote: --- Stan Nilsen [EMAIL PROTECTED] wrote: Matt, Thanks for the links sent earlier. I especially like the paper by Legg and Hutter regarding measurement of machine intelligence. The other paper I find difficult, probably it's deeper than I am. The AIXI paper is