Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-06 Thread William Pearson
2008/9/5 Mike Tintner [EMAIL PROTECTED]: MT:By contrast, all deterministic/programmed machines and computers are guaranteed to complete any task they begin. Will:If only such could be guaranteed! We would never have system hangs or deadlocks. Even if it could be made so, computer systems

Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-06 Thread Mike Tintner
Will, Yes, humans are manifestly a RADICALLY different machine paradigm - if you care to stand back and look at the big picture. Employ a machine of any kind and, in general, you know what you're getting - some glitches (esp. with complex programs), sure - but basically, in general, it

Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-06 Thread Mike Tintner
Sorry - para Our unreliability .. should have continued: Our unreliability is the negative flip-side of our positive ability to stop an activity at any point, incl. the beginning, and completely change tack/course or whole approach, incl. the task itself, and even completely contradict

RE: Language modeling (was Re: [agi] draft for comment)

2008-09-06 Thread John G. Rose
Thinking out loud here as I find the relationship between compression and intelligence interesting: Compression in itself has the overriding goal of reducing storage bits. Intelligence has coincidental compression. There is resource management there. But I do think that it is not ONLY

Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-06 Thread William Pearson
2008/9/6 Mike Tintner [EMAIL PROTECTED]: Will, Yes, humans are manifestly a RADICALLY different machine paradigm - if you care to stand back and look at the big picture. Employ a machine of any kind and, in general, you know what you're getting - some glitches (esp. with complex programs), etc.

RE: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-06 Thread Derek Zahn
It has been explained many times to Tintner that even though computer hardware works with a particular set of primitive operations running in sequence, a hardwired set of primitive logical operations operating in sequence is NOT the theory of intelligence that any AGI researcher is proposing

Re: AI isn't cheap (was Re: Real vs. simulated environments (was Re: [agi] draft for comment.. P.S.))

2008-09-06 Thread Steve Richfield
Matt, I heartily disagree with your view as expressed here, and as stated to me by heads of CS departments and other high-ranking CS PhDs, nearly (but not quite) all of whom have lost the fire in the belly that we all once had for CS/AGI. I DO agree that CS is like every other technological

Re: Language modeling (was Re: [agi] draft for comment)

2008-09-06 Thread Matt Mahoney
--- On Fri, 9/5/08, Pei Wang [EMAIL PROTECTED] wrote: Thanks for taking the time to explain your ideas in detail. As I said, our different opinions on how to do AI come from our very different understandings of intelligence. I don't take passing the Turing Test as my research goal (as explained

RE: Language modeling (was Re: [agi] draft for comment)

2008-09-06 Thread Matt Mahoney
--- On Sat, 9/6/08, John G. Rose [EMAIL PROTECTED] wrote: Compression in itself has the overriding goal of reducing storage bits. Not the way I use it. The goal is to predict what the environment will do next. Lossless compression is a way of measuring how well we are doing. -- Matt Mahoney,
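Matt's claim - that the point of lossless compression here is to measure how well a model predicts its environment - follows from coding theory: an ideal arithmetic coder driven by a predictive model spends about -log2 P(symbol) bits per symbol, so better prediction means a smaller file. The sketch below is a standard illustration of that idea, not code from this thread; the toy models and text are invented for the example.

```python
import math

def code_length_bits(text, model):
    """Total bits an ideal arithmetic coder would need, where
    model(history, ch) returns P(next char = ch | history)."""
    bits = 0.0
    for i, ch in enumerate(text):
        bits += -math.log2(model(text[:i], ch))
    return bits

# Toy models over a 27-symbol alphabet: a uniform guesser vs. one
# that has "learned" that 'e' is common. Probabilities sum to 1.
def uniform(history, ch):
    return 1 / 27

def biased(history, ch):
    return 0.3 if ch == 'e' else 0.7 / 26

text = "teeteetee"
# The better predictor achieves a shorter (compressed) code length.
assert code_length_bits(text, biased) < code_length_bits(text, uniform)
```

This is why a compression benchmark can stand in for a prediction benchmark: the code lengths differ only through the model's probability estimates.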

Re: Language modeling (was Re: [agi] draft for comment)

2008-09-06 Thread Pei Wang
I won't argue against your preference test here, since this is a big topic, and I've already made my position clear in the papers I mentioned. As for compression, yes every intelligent system needs to 'compress' its experience in the sense of keeping the essence but using less space. However, it

Re: AI isn't cheap (was Re: Real vs. simulated environments (was Re: [agi] draft for comment.. P.S.))

2008-09-06 Thread Matt Mahoney
Steve, where are you getting your cost estimate for AGI? Is it a gut feeling, or something like the common management practice of "I can afford $X, so it will cost $X"? My estimate of $10^15 is based on the value of the world economy, US $66 trillion per year and increasing 5% annually over the
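Matt's snippet is cut off before the derivation, but one way the cited numbers can reach a figure of order $10^15 is by accumulating world output at $66 trillion/year with 5% annual growth: the running total passes $10^15 after roughly a dozen years. This is a hedged reconstruction of the arithmetic, not necessarily his exact reasoning.

```python
# Accumulate world GDP of $66 trillion, growing 5% per year,
# until the cumulative total reaches $10^15.
gdp, growth = 66e12, 1.05
total, years = 0.0, 0
while total < 1e15:
    total += gdp * growth ** years
    years += 1
print(years, f"{total:.3g}")  # cumulative output crosses 1e15 in year 12
```

Whatever the exact model, the point of the figure is that AGI would be valued against (some fraction of) total world economic output, which is why the estimate is so large.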

Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-06 Thread Mike Tintner
DZ:AGI researchers do not think of intelligence as what you think of as a computer program -- some rigid sequence of logical operations programmed by a designer to mimic intelligent behavior. 1. Sequence/Structure. The concept I've been using is not that a program is a sequence of operations

Re: Language modeling (was Re: [agi] draft for comment)

2008-09-06 Thread Matt Mahoney
--- On Sat, 9/6/08, Pei Wang [EMAIL PROTECTED] wrote: As for compression, yes every intelligent system needs to 'compress' its experience in the sense of keeping the essence but using less space. However, it is clearly not lossless. It is not even what we usually call lossy compression,
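The distinction Pei and Matt are debating can be made concrete: lossless compression round-trips exactly, while "keeping the essence" of experience is closer to a summary from which the original is not recoverable. The sketch below uses Python's standard `zlib` for the lossless case and an invented toy "summary" for the lossy one; it illustrates the vocabulary, not either author's system.

```python
import zlib

data = b"the quick brown fox jumps over the lazy dog " * 20

# Lossless: decompress(compress(x)) == x, bit for bit, yet the
# compressed form is smaller because redundancy was removed.
packed = zlib.compress(data)
assert zlib.decompress(packed) == data
assert len(packed) < len(data)

# "Keeping the essence" in Pei's sense: a toy summary that retains
# only the distinct words. It is far smaller, but the original text
# cannot be reconstructed from it - more than ordinary lossy coding.
essence = b" ".join(sorted(set(data.split())))
assert len(essence) < len(packed)
```

The contrast is the crux of the disagreement: Matt uses lossless compression as a yardstick for prediction, while Pei holds that what intelligence does to experience is not compression in either technical sense.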