On 8/3/06, Ben Goertzel [EMAIL PROTECTED] wrote:
Hi,

On 8/2/06, Pei Wang [EMAIL PROTECTED] wrote:
Short answer: (1) AGI needs to allow fuzzy concepts, and to handle fuzziness properly.

Agreed: e.g. fuzzy modifiers like "more", "very", "many", "some", etc. must be handled by an AGI system.
Yeah, and I'd think modifiers like "many" are easily handled by a
probability distribution over integers, determined by the context. Easily
in theory, at least, since the details of choosing an appropriate
distribution in any given context might be a bit tricky.
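Ben's suggestion, a context-dependent distribution over integers for a quantifier like "many", could be sketched as follows. This is purely illustrative: the function name, the logistic shape, and the `scale` parameter are my assumptions, not anything proposed in the thread.

```python
import math

def many_distribution(context_size, scale=0.5):
    """Hypothetical sketch: model the fuzzy quantifier 'many' as a
    probability distribution over counts 0..context_size, with mass
    rising toward the upper end of the range."""
    # Unnormalized weights: a logistic curve that rises once the count
    # passes scale * context_size (i.e. "more than about half" counts
    # increasingly strongly as "many").
    weights = [1.0 / (1.0 + math.exp(-(n - scale * context_size)))
               for n in range(context_size + 1)]
    total = sum(weights)
    return [w / total for w in weights]

# "Many out of 10": most probability mass sits above 5.
dist = many_distribution(10)
```

The context-sensitivity Ben mentions would live in how `context_size` and `scale` are chosen: "many people in an elevator" and "many people at a concert" would get very different parameters, which is exactly the tricky part.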
Right, but the question is,
No matter how bad fuzzy logic is, it cannot be responsible for the
past failures of AI --- fuzzy logic has never been popular in the AI
community. Actually, numerical approaches have been criticized and
rejected for similar reasons from the very beginning, until the coming
of the Bayesian
YKY
1) I agree that the brain's probabilistic reasoning does not involve
high-precision calculations, but rather rough heuristic estimations
2) Of course, the brain has a LOT of stuff going on internally that is
not accessible to consciousness. In very many ways our unconscious
brains are
When you think something is more likely or less likely, you're
translating a feeling into English. The English translation doesn't
involve verbal probabilities like 0.6 or 0.8 - the syllables
"probability zero point eight" don't flow through your auditory
workspace. But that doesn't rule out
On 8/3/06, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
When you think something is more likely or less likely, you're
translating a feeling into English. The English translation doesn't
involve verbal probabilities like 0.6 or 0.8 - the syllables
"probability zero point eight" don't flow through your auditory
workspace.
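Eliezer's observation, that likelihood feelings surface as hedge words rather than numerals, is still compatible with a coarse internal numeric scale. A minimal sketch of that compatibility, with all threshold values and names hypothetical:

```python
# Hypothetical sketch: an internal probability estimate never appears
# verbally; only the hedge phrase it maps to reaches the "auditory
# workspace". Thresholds are illustrative, not empirical.
HEDGES = [
    (0.05, "almost certainly not"),
    (0.25, "unlikely"),
    (0.50, "maybe"),
    (0.75, "likely"),
    (0.95, "very likely"),
]

def verbalize(p):
    """Map an internal probability estimate to an English hedge word."""
    for threshold, phrase in HEDGES:
        if p <= threshold:
            return phrase
    return "almost certain"
```

Under this sketch, an agent could carry a degree of belief like 0.8 internally while only ever reporting "very likely", so the absence of spoken numbers does not rule out numeric representation underneath.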
Thanks for the thoughtful responses, folks. I have a few replies.
Pei Wang wrote:
No matter how bad fuzzy logic is, it cannot be responsible for the
past failures of AI --- fuzzy logic has never been popular in the AI
community.
Oh, no doubt about it: but fuzzy logic by itself was not the