2008/9/5 Mike Tintner [EMAIL PROTECTED]:
MT: By contrast, all deterministic/programmed machines and computers are
guaranteed to complete any task they begin.
Will: If only that could be guaranteed! We would never have system hangs or
deadlocks. Even if it could be made so, computer systems
Will,
Yes, humans are manifestly a RADICALLY different machine paradigm - if you
care to stand back and look at the big picture.
Employ a machine of any kind and, in general, you know what you're getting -
some glitches (esp. with complex programs), sure - but basically, in
general, it
Sorry - the paragraph "Our unreliability..." should have continued:
Our unreliability is the negative flip-side of our positive ability to stop
an activity at any point, incl. the beginning, and completely change tack/
course or whole approach, incl. the task itself, and even completely
contradict
Thinking out loud here as I find the relationship between compression and
intelligence interesting:
Compression in itself has the overriding goal of reducing storage bits.
Intelligence has coincidental compression. There is resource management
there. But I do think that it is not ONLY
2008/9/6 Mike Tintner [EMAIL PROTECTED]:
Will,
Yes, humans are manifestly a RADICALLY different machine paradigm...
It has been explained many times to Tintner that even though computer hardware
works with a particular set of primitive operations running in sequence, a
hardwired set of primitive logical operations operating in sequence is NOT the
theory of intelligence that any AGI researcher is proposing.
Matt,
I heartily disagree with your view as expressed here, and as stated to me by
heads of CS departments and other high-ranking CS PhDs, nearly (but not
quite) all of whom have lost the fire in the belly that we all once had
for CS/AGI.
I DO agree that CS is like every other technological
--- On Fri, 9/5/08, Pei Wang [EMAIL PROTECTED] wrote:
Thanks for taking the time to explain your ideas in detail. As I said, our
different opinions on how to do AI come from our very different
understanding of intelligence. I don't take passing the Turing Test as my
research goal (as explained
--- On Sat, 9/6/08, John G. Rose [EMAIL PROTECTED] wrote:
Compression in itself has the overriding goal of reducing storage bits.
Not the way I use it. The goal is to predict what the environment will do next.
Lossless compression is a way of measuring how well we are doing.
-- Matt Mahoney,
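Matt's point - that lossless compression is a way of measuring prediction - follows from Shannon coding: an ideal arithmetic coder spends about -log2 p(x) bits on each symbol, so a model that predicts the next symbol better produces a shorter encoding. A minimal sketch of that relationship (the toy predictors below are illustrative assumptions, not anyone's actual compressor):

```python
import math

def ideal_code_length(text, predict):
    """Total bits an ideal arithmetic coder would need, given a model
    `predict` that maps a prefix to a probability distribution over
    the next character."""
    bits = 0.0
    for i, ch in enumerate(text):
        p = predict(text[:i]).get(ch, 1e-9)  # probability assigned to the actual char
        bits += -math.log2(p)
    return bits

# Two hypothetical predictors over the alphabet {a, b, c}:
def uniform(prefix):
    return {c: 1/3 for c in "abc"}

def informed(prefix):
    # "knows" that 'a' dominates in this toy string
    return {"a": 0.7, "b": 0.15, "c": 0.15}

text = "aabaacaa"
# The better predictor yields the shorter (compressed) encoding:
assert ideal_code_length(text, informed) < ideal_code_length(text, uniform)
```

The compressed size is just the accumulated log-loss of the predictive model, which is why a lossless compression benchmark doubles as a prediction benchmark.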
I won't argue against your preference test here, since this is a
big topic, and I've already made my position clear in the papers I
mentioned.
As for compression, yes every intelligent system needs to 'compress'
its experience in the sense of keeping the essence but using less
space. However, it
Steve, where are you getting your cost estimate for AGI? Is it a gut feeling,
or something like the common management practice of "I can afford $X, so it
will cost $X"?
My estimate of $10^15 is based on the value of the world economy, US $66
trillion per year and increasing 5% annually over the
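The message is cut off, but the quoted figures are enough to see how a number on the order of $10^15 can fall out of them: a $66 trillion/year economy growing at 5% accumulates to roughly $10^15 over about a decade. The 12-year horizon below is a guessed parameter, not from the message; only the GDP and growth figures are Matt's.

```python
# Sketch of the arithmetic behind a ~$10^15 estimate. The horizon is a
# hypothetical assumption (the original message is truncated); the GDP
# and growth figures come from the message itself.
gdp = 66e12     # world economy, US$ per year
growth = 1.05   # 5% annual growth
years = 12      # assumed horizon
total = sum(gdp * growth**t for t in range(years))
print(f"total ~= ${total:.2e}")  # on the order of 10^15
```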
DZ: AGI researchers do not think of intelligence as what you think of as a
computer program -- some rigid sequence of logical operations programmed by a
designer to mimic intelligent behavior.
1. Sequence/Structure. The concept I've been using is not that a program is a
sequence of operations
--- On Sat, 9/6/08, Pei Wang [EMAIL PROTECTED] wrote:
As for compression, yes every intelligent system needs to 'compress' its
experience in the sense of keeping the essence but using less space.
However, it is clearly not lossless. It is not even what we usually call
lossy compression,