[agi] Re: Accidental Genius

2008-05-09 Thread Brad Paulsen
Bryan, Thanks for the clarifications and the links! Cheers, Brad - Original Message - From: Bryan Bishop [EMAIL PROTECTED] To: [EMAIL PROTECTED]; [EMAIL PROTECTED]; [EMAIL PROTECTED]; [EMAIL PROTECTED] Sent: Wednesday, May 07, 2008 9:46 PM Subject: Re: Accidental Genius On Wed,

Re: [agi] Re: pattern definition

2008-05-09 Thread Mike Tintner
Boris: I define intelligence as an ability to predict/plan by discovering/projecting patterns within an input flow. IOW, a capacity to generalize. A general intelligence is something that generalizes from incoming info about the world. Well, no, it can't be just that. Look at what you write

Re: Symbol Grounding [WAS Re: [agi] AGI-08 videos]

2008-05-09 Thread Jim Bromer
- Original Message From: Mike Tintner [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Thursday, May 8, 2008 9:05:22 PM Subject: Re: Symbol Grounding [WAS Re: [agi] AGI-08 videos] I just want to make the point that I think categorical grounding is necessary for AGI, but I believe

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-09 Thread Vladimir Nesov
On Fri, May 9, 2008 at 4:29 AM, Matt Mahoney [EMAIL PROTECTED] wrote: I claim there is no P such that P(P,y) = P(y) for all y. (I assume you mean something like P((P,y))=P(y)). If P(s)=0 (one answer to all questions), then P((P,y))=0 and P(y)=0 for all y. -- Vladimir Nesov [EMAIL PROTECTED]
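Nesov's counterexample can be made concrete with a minimal sketch (my own illustration, not from the thread): a constant predictor that gives one answer to all questions trivially satisfies the fixed-point condition P((P,y)) = P(y), since its output never depends on its input.

```python
def P(question):
    """A degenerate predictor: one answer to all questions."""
    return 0

# For any y, asking P about the pair (P, y) gives the same result
# as asking P about y directly, so P((P, y)) == P(y) holds for all y.
for y in ["weather", 42, ("nested", "question")]:
    assert P((P, y)) == P(y)  # both sides are 0
```

This shows the trivial solution exists; whether any *non-degenerate* P exists is the substantive question Matt's claim is really about.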

Re: [agi] Accidental Genius

2008-05-09 Thread Mike Tintner
Right on. Everything I've read, esp. Grandin, strongly suggests autism is crucially hypersensitivity rather than an emotional disorder. If every time a normal person touched someone they got the equivalent of an electric shock, they'd stay away from people too. [Thanks for your previous

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-09 Thread Jim Bromer
- Original Message From: Matt Mahoney [EMAIL PROTECTED] --- Jim Bromer [EMAIL PROTECTED] wrote: I don't want to get into a quibble fest, but understanding is not necessarily constrained to prediction. What would be a good test for understanding an algorithm? -- Matt Mahoney,

RE: [agi] Re: pattern definition

2008-05-09 Thread John G. Rose
So many overloads - pattern, complexity, atoms - can't we come up with new terms, like schfinkledorfs? But a very interesting question is: given an image of W x H pixels of 1-bit depth (on or off), one frame, how many patterns exist within this grid? When you think about it, it becomes an
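The answer turns entirely on what "pattern" is taken to mean, which is exactly the overloading John complains about. A rough illustration (my own framing, not from the post) for two candidate readings:

```python
def full_configurations(w, h):
    """Count distinct on/off colourings of the whole W x H 1-bit grid."""
    return 2 ** (w * h)

def sub_rectangles(w, h):
    """Count axis-aligned sub-rectangle windows within the grid.

    A window is fixed by choosing a left/right column pair and a
    top/bottom row pair: W*(W+1)/2 horizontal spans times
    H*(H+1)/2 vertical spans.
    """
    return (w * (w + 1) // 2) * (h * (h + 1) // 2)

print(full_configurations(3, 3))  # 512
print(sub_rectangles(3, 3))       # 36
```

Under looser definitions (arbitrary pixel subsets, patterns modulo translation or rotation) the counts differ again, which is why the question "how many patterns?" has no answer until the term is pinned down.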

Re: [agi] Re: pattern definition

2008-05-09 Thread Boris Kazachenko
And it doesn't literally make much sense because your blog has a lot of generalizations with no examples - no individualizations/particularisations of, for example, what individual/particular problems your algorithms might apply to. The making sense level of your brain - an AGI that works - is

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-09 Thread Stephen Reed
Hi Matt, You asked: What would be a good test for understanding an algorithm? As I mentioned before, I want to create a system capable of being taught - specifically capable of being taught skills. And I strongly share your interest in answers to this question. A student should be able to

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-09 Thread Steve Richfield
Matt, On 5/9/08, Stephen Reed [EMAIL PROTECTED] wrote: Skill: Trimming the whitespace off both ends of a character string. One of the many annoyances of writing real-world AI programs is having to write this function to replace the broken system functions that are supposed to do this, but
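The skill itself is small enough to sketch in full. Here is a minimal hand-rolled trim of the kind Steve describes rewriting (most languages ship a built-in, e.g. Python's str.strip, but built-ins sometimes disagree on what counts as whitespace, hence the annoyance):

```python
# Characters treated as trimmable whitespace (an explicit choice,
# which is precisely what "broken" system functions get wrong).
WHITESPACE = " \t\r\n\f\v"

def trim(s):
    """Strip whitespace from both ends of s without using str.strip."""
    start = 0
    end = len(s)
    while start < end and s[start] in WHITESPACE:
        start += 1
    while end > start and s[end - 1] in WHITESPACE:
        end -= 1
    return s[start:end]

print(trim("  \t hello world \n"))  # hello world
```

As a test of "understanding the algorithm" in Matt's sense, a student could be asked to predict its output on edge cases: the empty string, an all-whitespace string, a string with interior whitespace that must survive.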

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-09 Thread Jim Bromer
On 5/9/08, Stephen Reed [EMAIL PROTECTED] wrote: Skill: Trimming the whitespace off both ends of a character string. One of the many annoyances of writing real-world AI programs is having to write this function; to replace the broken system functions that are supposed to do this, but which

Re: [agi] Re: pattern definition

2008-05-09 Thread Mike Tintner
Jim, I doubt that your specification equals my individualization. If I want to be able to recognize the individuals Curtis/Brian/Carl and Billi Bromer, only images will do it: http://www.dunningmotorsales.com/IMAGES/people/Curtis%20Bromer.jpg

Re: [agi] Re: pattern definition

2008-05-09 Thread Joseph Henry
Mike, what is your stance on vector images?

[agi] organising parallel processes, try2

2008-05-09 Thread rooftop8000
I'll try to explain it more. Suppose you have a lot of processes, all containing some production rule(s). They communicate with messages. They all should get CPU time somehow. Some processes just do low-level responses, some monitor other processes, etc. Some are involved in looking at the
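The scheme described above - production-rule processes, message passing, and some way of sharing CPU time - can be sketched as a toy simulation (my own construction under those assumptions, not rooftop8000's actual design), here using round-robin scheduling so every process gets a slice per tick:

```python
from collections import deque

class Process:
    """A process holding one production rule and an inbox of messages."""

    def __init__(self, name, rule):
        self.name = name
        self.rule = rule          # rule(message) -> list of (target, message)
        self.inbox = deque()

    def step(self, processes):
        """Consume one message, fire the rule, deliver any outputs."""
        if not self.inbox:
            return
        msg = self.inbox.popleft()
        for target, out in self.rule(msg):
            processes[target].inbox.append(out)

# A low-level responder and a monitoring process, as in the post.
log = []
procs = {
    "doubler": Process("doubler", lambda m: [("monitor", m * 2)]),
    "monitor": Process("monitor", lambda m: log.append(m) or []),
}
procs["doubler"].inbox.append(21)

# Round-robin scheduler: each tick, every process gets CPU time.
for _ in range(3):
    for p in procs.values():
        p.step(procs)

print(log)  # [42]
```

Round-robin is only the simplest allocation policy; priority queues or market-style bidding for CPU time would slot into the same loop.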

Re: [agi] organising parallel processes, try2

2008-05-09 Thread Stephen Reed
Hi, The Texai system, as I envision its deployment, will have the following characteristics:
* a lot of processes
* a lot of hosts
* message passing between processes, which are arranged in a hierarchical control system
* higher level processes will be

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-09 Thread Matt Mahoney
--- Steve Richfield [EMAIL PROTECTED] wrote: Matt, On 5/8/08, Matt Mahoney [EMAIL PROTECTED] wrote: --- Steve Richfield [EMAIL PROTECTED] wrote: On 5/7/08, Vladimir Nesov [EMAIL PROTECTED] wrote: See http://www.overcomingbias.com/2008/01/newcombs-proble.html After

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-09 Thread Matt Mahoney
--- Vladimir Nesov [EMAIL PROTECTED] wrote: On Fri, May 9, 2008 at 4:29 AM, Matt Mahoney [EMAIL PROTECTED] wrote: I claim there is no P such that P(P,y) = P(y) for all y. (I assume you mean something like P((P,y))=P(y)). If P(s)=0 (one answer to all questions), then P((P,y))=0 and

[agi] Self-maintaining Architecture first for AI

2008-05-09 Thread William Pearson
After getting off completely on the wrong foot last time I posted something, and not having had time to read the papers I should have, I have decided to start afresh and outline where I am coming from. I'll get around to doing a proper paper later. There are two possible modes for designing a

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-09 Thread Vladimir Nesov
On Sat, May 10, 2008 at 2:09 AM, Matt Mahoney [EMAIL PROTECTED] wrote: --- Vladimir Nesov [EMAIL PROTECTED] wrote: (I assume you mean something like P((P,y))=P(y)). If P(s)=0 (one answer to all questions), then P((P,y))=0 and P(y)=0 for all y. You're right. But we wouldn't say that the

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-09 Thread Stan Nilsen
Matt, You asked: What would be a good test for understanding an algorithm? Thanks for posing this question. It has been a good exercise. Assuming that the key word here is understanding rather than algorithm, I submit: a test of understanding is whether one can give a correct *explanation* for

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-09 Thread Matt Mahoney
--- Vladimir Nesov [EMAIL PROTECTED] wrote: On Sat, May 10, 2008 at 2:09 AM, Matt Mahoney [EMAIL PROTECTED] wrote: --- Vladimir Nesov [EMAIL PROTECTED] wrote: (I assume you mean something like P((P,y))=P(y)). If P(s)=0 (one answer to all questions), then P((P,y))=0 and P(y)=0 for

Re: [agi] Self-maintaining Architecture first for AI

2008-05-09 Thread Richard Loosemore
William Pearson wrote: After getting off completely on the wrong foot last time I posted something, and not having had time to read the papers I should have, I have decided to start afresh and outline where I am coming from. I'll get around to doing a proper paper later. There are two possible