Benjamin Goertzel wrote:
And, importance levels need to be context-dependent, so that assigning
them requires sophisticated inference in itself...
The problem may not be so serious. Common sense reasoning may require only
*shallow* inference chains, e.g. 5 applications of rules. So I'm
Hi,
Possibly this could be approached by partitioning the rule-set into
small chunks of rules that work together, so that one didn't end up
trying everything against everything else. These chunks of rules
might well be context dependent, so that one would use different chunks
at a dinner table
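One hypothetical way the chunking idea above might look in code (all names here are illustrative, not from any actual system): rules are grouped into context-tagged chunks, and only the chunk matching the active context is tried, instead of matching every rule against every fact.

```python
# Hypothetical sketch of context-dependent rule chunks.
# Each chunk is a small list of (premise, conclusion) rules that
# work together; only the chunk for the current context is consulted.

CHUNKS = {
    "dinner_table": [
        ("holds(cup)", "can_drink"),
        ("food_on(plate)", "can_eat"),
    ],
    "traffic": [
        ("light(red)", "must_stop"),
        ("light(green)", "may_go"),
    ],
}

def infer(context, facts):
    """Fire only the rules in the chunk for the active context."""
    conclusions = set()
    for premise, conclusion in CHUNKS.get(context, []):
        if premise in facts:
            conclusions.add(conclusion)
    return conclusions

print(infer("dinner_table", {"holds(cup)"}))  # {'can_drink'}
```

The point of the partition is in the loop: the matching work is bounded by the chunk size, not by the total number of rules in the system.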
On Sat, 20 Jan 2007 00:32:18 -0500, Benjamin Goertzel [EMAIL PROTECTED]
wrote:
I'm not sure exchangeability implies Chaitin randomness.
Yeah, you're right, this statement needs qualification -- it wasn't
quite right as stated. You're right that a binary series formed by
tossing a weighted
Benjamin Goertzel wrote:
Hi,
Possibly this could be approached by partitioning the rule-set into
small chunks of rules that work together, so that one didn't end up
trying everything against everything else. These chunks of rules
might well be context dependent, so that one would use
This author makes a distinction, similar to the one in my mind, between
algorithmic and intuitive randomness.
===
We can say that a sequence is algorithmically random if it has an amount
of algorithmic information approximately equal to its length. Note that
this is related to, but not
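A rough, computable illustration of the definition above (my own sketch, not from the quoted author): true algorithmic (Kolmogorov) information is uncomputable, so a general-purpose compressor like zlib is used here only as an upper-bound proxy for it. A sequence whose compressed size is close to its length is, in this loose sense, algorithmically random.

```python
import os
import zlib

def compressed_ratio(data: bytes) -> float:
    """Compressed size divided by original size; a ratio near (or above)
    1.0 suggests the data is incompressible, hence 'random' in the
    compressibility sense discussed above."""
    return len(zlib.compress(data, 9)) / len(data)

orderly = b"01" * 500          # highly patterned 1000-byte string
disorderly = os.urandom(1000)  # incompressible with high probability

print(compressed_ratio(orderly))      # well below 1
print(compressed_ratio(disorderly))   # near (or slightly above) 1
```

Note that zlib only gives an upper bound: a sequence it fails to compress might still have a short description under some other program.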
Forgot to say: If anyone has found similarly informative videos regarding
cognitive computing or AGI in general, I'm very interested.
On 1/20/07, Kingma, D.P. [EMAIL PROTECTED] wrote:
(Almaden Institute Conference on Cognitive Computing)
Here's a video in which Donald Michie talks about the early years of AI
(GOFAI), beginning with his discussions with Alan Turing about building a
child machine.
http://www.aiai.ed.ac.uk/events/ccs2002/2002-10-11-michie.qtl
--- gts [EMAIL PROTECTED] wrote:
We can imagine ourselves parsing the sequence, dividing it into two
groups: 1) complex/disorderly subsequences not amenable to simple
algorithmic derivation and 2) simple/orderly subsequences such as those
above that are so amenable.
Now, if I
Hi,
Now, if I understand Chaitin's information-theoretic compressibility
definition of randomness correctly (and I very likely do not), the
simple/orderly subsequences in group 2) are compressible and so would
count against the larger sequence in any compressibility measure of its
randomness.
On Sat, 20 Jan 2007 20:41:55 -0500, Matt Mahoney [EMAIL PROTECTED]
wrote:
Any information you save by compressing the compressible bits of a
random sequence is lost because you also have to specify the location of
those bits. (You can use the counting argument to prove this).
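A back-of-envelope check of the point above (my own bookkeeping sketch, not the full counting-argument proof): suppose k of the n bits of a random string land in "compressible" spots and each such bit could be stored for free. You still have to say *which* positions those are, and naming a k-subset of n positions costs log2(C(n, k)) bits, which more than eats the k bits you hoped to save.

```python
import math

def location_cost_bits(n: int, k: int) -> float:
    """Bits needed to specify which k of n positions are the
    'compressible' ones: log2 of the number of k-subsets of n."""
    return math.log2(math.comb(n, k))

n, k = 1000, 100
saved = k  # optimistic: assume all k compressible bits are stored for free
cost = location_cost_bits(n, k)
print(f"saved {saved} bits, but location info costs {cost:.1f} bits")
```

For n = 1000 and k = 100 the location information alone runs to several hundred bits, so the "savings" are an illusion, as Matt's counting argument implies.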
Ah, yes...