On 8/8/06, J. Andrew Rogers [EMAIL PROTECTED] wrote: C'mon, the brain is not so dumb. Which is precisely why it does not retain patterns more complex than
is strictly necessary to get the job done. The most efficient representation of pi, for almost all practical purposes, is as a sequence of
On 8/7/06, Pei Wang [EMAIL PROTECTED] wrote:
At the beginning, I also believed that first-order predicate logic (FOPL) plus probability theory and fuzzy logic was the way to go, like many others in the field. It was only after I ran into many problems
that I began to build my alternative, NARS.
On 8/6/06, Pei Wang [EMAIL PROTECTED] wrote: If you just want an advanced production system, why bother to build your own rather than simply use Soar or ACT-R? Both of them allow you
to define your own rules.
Indeed, when Allen Newell designed Soar, he meant it to be a unified cognitive
On 8/7/06, J. Andrew Rogers [EMAIL PROTECTED] wrote: On Aug 5, 2006, at 1:05 PM, Yan King Yin wrote: Suppose a person has a definition of pi in his mind, but we don't
know if it's the correct one. But if he succeeds in telling us many digits of pi that are correct, then it is overwhelmingly
On 8/7/06, Yan King Yin [EMAIL PROTECTED] wrote:
On 8/6/06, Pei Wang [EMAIL PROTECTED] wrote:
If you just want an advanced production system, why bother to build
your own rather than simply use Soar or ACT-R? Both of them allow you
to define your own rules.
Indeed, when Allen Newell
On Aug 7, 2006, at 3:30 AM, Yan King Yin wrote:
On 8/7/06, J. Andrew Rogers [EMAIL PROTECTED] wrote:
Or even more likely that his definition is a memorized sequence of
digits.
C'mon, the brain is not so dumb.
Which is precisely why it does not retain patterns more complex than
is
If you just want an advanced production system, why bother to build
your own rather than simply use Soar or ACT-R? Both of them allow you
to define your own rules.
Pei
On 8/5/06, Yan King Yin [EMAIL PROTECTED] wrote:
Indeed, the AGI model that I have in mind is basically a production-rule
Yan King Yin wrote:
I'm not sure what exactly your ideas are for the mechanisms of model
and constraints, but in an AGI I think we can simply use predicate
logic (or, equivalently, conceptual graphs) to represent thoughts. I'd
even go further to say that the brain actually uses /symbolic/
On Aug 5, 2006, at 1:05 PM, Yan King Yin wrote:
Suppose a person has a definition of pi in his mind, but we don't
know if it's the correct one. But if he succeeds in telling us
many digits of pi that are correct, then it is overwhelmingly
likely that he has got the correct definition,
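The argument above can be made quantitative: a person with a wrong procedure matches each successive digit of pi only by luck, roughly with probability 1/10 per digit, so k correct digits multiply the odds in favour of "he has a correct definition" by about 10^k. A minimal sketch in Python, where the prior of 0.01 and the simple likelihood-ratio model are illustrative assumptions, not anything proposed in the thread:

```python
# Toy Bayesian reading of the "digits of pi" argument: each correct
# reported digit multiplies the odds of "correct definition" by ~10,
# since a wrong procedure matches a given digit with probability ~1/10.
# The prior and the independence assumption are illustrative only.

KNOWN_PI = "31415926535897932384"  # first 20 digits of pi

def posterior_correct(claimed: str, prior: float = 0.01) -> float:
    """Posterior P(correct definition) after checking claimed digits."""
    k = 0
    for c, d in zip(claimed, KNOWN_PI):
        if c != d:
            return 0.0  # one wrong digit falsifies the claim outright
        k += 1
    # Likelihood ratio: P(k matches | correct) = 1 vs. 10**-k by chance.
    odds = (prior / (1 - prior)) * 10.0 ** k
    return odds / (1 + odds)

print(posterior_correct("3141592653"))  # 10 correct digits: close to 1
```

Even with a sceptical prior, ten correct digits push the posterior within about 10^-8 of certainty, which is the sense in which it is "overwhelmingly likely".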
Yan King Yin wrote:
On 8/5/06, Ben Goertzel [EMAIL PROTECTED] wrote:
No. IMO, a simple rule like this does not correctly capture human
usage of qualifiers across contexts, and is not adequate for AI
purposes.
Perhaps this rule is a decent high-level
Yan King Yin wrote:
...
2. If you think your method is better, the mechanism underlying your
rule might be more complex than predicate logic. That's kind of strange.
YKY
Not strange at all. The brain had a long evolutionary history before
language was ever created. Languages are attempts
On 8/6/06, Charles D Hixson [EMAIL PROTECTED] wrote:
Not strange at all. The brain had a long evolutionary history before language was ever created. Languages are attempts to model parts of the organization of the brain (and NOT attempts at a complete modeling).
Therefore it's reasonable to
On 8/5/06, Yan King Yin [EMAIL PROTECTED] wrote:
I think the brain is actually quite smart, perhaps due to intense selection
for intelligence over a long period of time dating back to fishes. I
suspect that the brain actually has an internal representation somewhat
similar to predicate logic.
On 8/6/06, Richard Loosemore [EMAIL PROTECTED] wrote: I too am a little puzzled by Ben's reservations here. Is it because Yan implied that the rule would be applied literally, and
therefore it would be fragile (e.g. there might be a case where the threshold for "significantly" was missed by a
On 8/6/06, Pei Wang [EMAIL PROTECTED] wrote:
I think the brain is actually quite smart, perhaps due to intense selection for intelligence over a long period of time dating back to fishes. I suspect that the brain actually has an internal representation somewhat
similar to predicate logic.
I tend to agree with Richard's view and I may build an AGI with symbolic, non-numerical inference.
1. As Russell pointed out, if the priors are not known or are in extremely low precision, Bayes' rule is not very applicable. Number crunching with priors of 1-2 bits precision is garbage in, garbage
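The "garbage in, garbage out" point can be made concrete: when a prior is known only to within a factor of a few (1-2 bits of precision), the same evidence can land the posterior on either side of a decision boundary. A small sketch, where the likelihood ratio of 20 and the prior range are invented for illustration:

```python
# Sketch of low-precision priors feeding Bayes' rule: a prior known
# only as "about 1 in 50, give or take a factor of ~4" yields posteriors
# ranging from clearly-false to probably-true on identical evidence.
# The numbers are illustrative assumptions, not anyone's real data.

def posterior(prior: float, lr: float) -> float:
    """Posterior from a prior and a likelihood ratio P(e|H)/P(e|~H)."""
    odds = prior / (1 - prior) * lr
    return odds / (1 + odds)

lr = 20.0  # moderately strong evidence for H
for prior in (0.005, 0.02, 0.08):  # ~2 bits of uncertainty in the prior
    print(f"prior={prior:.3f} -> posterior={posterior(prior, lr):.2f}")
```

Here the posterior swings from about 0.09 to about 0.63, so the conclusion flips depending on which end of the imprecise prior you picked.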
Richard,
Thanks for taking the time to explain your position. I actually agree
with most of what you wrote, though I don't think it is inconsistent with
my point, that is, beliefs do need numerical truth values.
Let me explain briefly (I have to leave soon). In an AGI system (at
least in mine), a
YKY:
(1) Your worry about the Bayesian approach is reasonable, but it is
not the only possible way to use numerical truth value --- even Ben
will agree with me here. ;-)
(2) Accuracy is not a big problem, but if you do some experiments on
incremental learning, you will soon see that 1-2 digits
Hi,
It's easy enough to write out algebraic rules for manipulating fuzzy
qualifiers like "very likely", "may", and so forth. It may well be
that the human mind uses abstract, intuitive, algebraic-like rules for
manipulating these, instead of or in parallel to more quantitative
methods...
However,
Pei
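The kind of algebraic rules Ben describes can be sketched very simply. The scale, the "weakest conjunct" rule, and the mirrored negation below are my own toy construction for illustration, not NARS, Novamente, or anything proposed in this thread:

```python
# Toy algebra over symbolic likelihood qualifiers: an ordered scale,
# conjunction takes the weaker qualifier, negation mirrors the scale.
# The scale and rules are illustrative assumptions only.

SCALE = ["impossible", "unlikely", "may", "likely", "very likely", "certain"]

def conj(a: str, b: str) -> str:
    """'A and B' is no more credible than the weaker conjunct."""
    return min(a, b, key=SCALE.index)

def neg(a: str) -> str:
    """Mirror the qualifier across the scale's midpoint."""
    return SCALE[len(SCALE) - 1 - SCALE.index(a)]

print(conj("very likely", "may"))  # -> "may"
print(neg("unlikely"))             # -> "very likely"
```

Rules like these run entirely without numbers, which is the "in parallel to more quantitative methods" possibility Ben raises.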
I think we are very much in agreement, though perhaps our main
difference is in the emphasis, and the exact role played by the
numerical truth value. I certainly want to emphasize that I think
this *is* calculated sometimes. (And I agree that it is not really
equivalent to a
Let me reply to everyone here...
Pei: You said non-numeric heuristics (such as endorsement theory) may run into problems. Yes, but I believe those problems can be solved using further heuristics (e.g. see the Wikipedia article on the Nixon diamond). If you resolve the Nixon diamond by referring to
Ben: I think the problem of contextuality may be solved like this:
Examples:
John and Mary have many kids. (like, 10)
This Chinese restaurant has many customers. (like 100s)
Many people in Africa have AIDS. (like 10s of millions)
so I propose a rule like this:
IF
n is significantly the
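The rule is cut off here, but the examples suggest something like "n counts as 'many' when it significantly exceeds what is typical for the reference class". A minimal sketch of that reading, where the typical counts and the 2x "significantly" factor are invented for illustration (note it ignores the Africa example, where "many" seems to track a proportion of a population rather than a typical group size):

```python
# Sketch of a context-relative rule for "many": n qualifies as "many"
# when it significantly exceeds the typical count for the reference
# class.  Typical counts and the 2x factor are invented assumptions.

TYPICAL = {
    "kids per family": 2,
    "customers per restaurant": 50,
}

def is_many(n: int, context: str, factor: float = 2.0) -> bool:
    """True if n is 'significantly' above what is typical in context."""
    return n >= factor * TYPICAL[context]

print(is_many(10, "kids per family"))           # 10 kids: many
print(is_many(30, "customers per restaurant"))  # 30 customers: not many
```

The fragility Ben and Richard discuss shows up immediately: everything hinges on the hand-set typical counts and on the sharp threshold.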
On 8/4/06, Yan King Yin [EMAIL PROTECTED] wrote:
Now, figuring out all the heuristical NTV /
symbolic qualifier's update rules, such that an AGI will
always be internally consistent, and provably increasing
in accuracy, is a very non-trivial task.
Well indeed it is of course impossible, no matter
No. IMO, a simple rule like this does not correctly capture human
usage of qualifiers across contexts, and is not adequate for AI
purposes.
Perhaps this rule is a decent high-level approximation, but AGI
requires better...
-- Ben
On 8/4/06, Yan King Yin [EMAIL PROTECTED] wrote:
Ben:
On 8/5/06, Russell Wallace [EMAIL PROTECTED] wrote:
Now, figuring out all the heuristical NTV / symbolic qualifier's update rules, such that an AGI will always be internally consistent, and provably increasing in accuracy, is a very non-trivial task.
Well indeed it is of course impossible, no
On 8/3/06, Ben Goertzel [EMAIL PROTECTED] wrote:
Hi, On 8/2/06, Pei Wang [EMAIL PROTECTED] wrote: Short answer: (1) AGI needs to allow fuzzy concept, and to handle fuzziness properly, Agreed:
e.g. fuzzy modifiers like more, very, many, some etc. must be handled by an AGI system. Yeah, and I'd think
Yeah, and I'd think modifiers like many are easily handled by a
probability distribution determined by the context over integers. Easily at
least in theory that is since the details of choosing an appropriate
distribution in any given context might be a bit tricky.
Right, but the question is,
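Ben's suggestion, a context-determined probability distribution over integers, can also be sketched directly. Here "many" is modelled as a fuzzy membership curve (a logistic over log n, centred on a per-context characteristic scale); the scales and curve shape are my illustrative assumptions, exactly the "tricky details" he flags:

```python
# Sketch of "many" as a context-parameterised curve over integers:
# P(a speaker would call n "many") rises with log n around the
# context's characteristic scale.  Scales and the logistic shape
# are illustrative assumptions, not a worked-out proposal.

import math

SCALE = {"kids": 3.0, "restaurant customers": 80.0}

def p_many(n: int, context: str, sharpness: float = 3.0) -> float:
    """P(a speaker would call n 'many' in this context)."""
    x = math.log(n) - math.log(SCALE[context])
    return 1.0 / (1.0 + math.exp(-sharpness * x))

print(round(p_many(10, "kids"), 2))                  # high
print(round(p_many(10, "restaurant customers"), 2))  # near zero
```

Ten kids comes out as almost certainly "many", while ten restaurant customers does not, which matches the intuition behind YKY's examples while avoiding a hard threshold.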
No matter how bad fuzzy logic is, it cannot be responsible for the
past failures of AI --- fuzzy logic has never been popular in the AI
community. Actually, numerical approaches have been criticized and
rejected for similar reasons from the very beginning, until the coming
of the Bayesian
YKY
1) I agree that the brain's probabilistic reasoning does not involve
high-precision calculations, but rather rough heuristic estimations
2) Of course, the brain has a LOT of stuff going on internally that is
not accessible to consciousness. In very many ways our unconscious
brains are
When you think something is more likely or less likely, you're
translating a feeling into English. The English translation doesn't
involve verbal probabilities like 0.6 or 0.8 - the syllables
"probability zero point eight" don't flow through your auditory
workspace. But that doesn't rule out
On 8/3/06, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
When you think something is more likely or less likely, you're translating a feeling into English. The English translation doesn't involve verbal probabilities like 0.6 or
0.8 - the syllables "probability zero point eight" don't flow through your
Thanks for the thoughtful responses, folks. I have a few replies.
Pei Wang wrote:
No matter how bad fuzzy logic is, it cannot be responsible for the
past failures of AI --- fuzzy logic has never been popular in the AI
community.
Oh, no doubt about it: but fuzzy logic by itself was not the
Hi,
On 8/2/06, Pei Wang [EMAIL PROTECTED] wrote:
Short answer: (1) AGI needs to allow fuzzy concept, and to handle
fuzziness properly,
Agreed: e.g. fuzzy modifiers like more, very, many, some etc. must be
handled by an AGI system, along with fuzzy membership statements like
Fido is a member