Hi John,
Re your idea that there should be an intermediate-level representation:
1. Obviously, we do not currently know how the brain stores that
representation. Things get insanely complex as neuroscientists go higher up
the visual pathways from the primary visual cortex.
2. I advocate
On 3/8/07, Matt Mahoney [EMAIL PROTECTED] wrote:
[re: logical abduction for interpretation of natural language]
One disadvantage of this approach is that you have to hand code lots of
language knowledge. They don't seem to have solved the problem of
acquiring
such knowledge from training
YKY (Yan King Yin) wrote:
Hi John,
Re your idea that there should be an intermediate-level representation:
1. Obviously, we do not currently know how the brain stores that
representation. Things get insanely complex as neuroscientists
go higher up the visual pathways from the primary
discoveries tend to make me believe
that the human brain does itself have (or indeed, is) an internal simulation
world.
- Original Message -
From: Ben Goertzel [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Saturday, March 10, 2007 9:19 AM
Subject: Re: [agi] The Missing Piece
John
Mark Waser wrote:
In the Novamente design this is dealt with via a currently
unimplemented aspect of the design called the internal simulation
world. This is a very non-human-brain-like approach
Why do you believe that this is a very non-human-brain-like
approach? Mirror neurons and many
On 3/10/07, Ben Goertzel [EMAIL PROTECTED] wrote:
In a sense we do, but it's not implemented in the brain as an actual sim
world with a physics engine and so forth
Yes it is, or at least a reasonable facsimile thereof.
... our internal sim world is a
lot less physically accurate (more
On Sat, Mar 10, 2007 at 10:11:19AM -0500, Ben Goertzel wrote:
In a sense we do, but it's not implemented in the brain as an actual sim
world with a physics engine and so forth ... our internal sim world is a
I'm not sure we know how it's implemented. A lot of things are done
by topographic
My philosophy of AI has never been logic-based or neural-based. I did explore
neural nets during the neural-net mania of the nineties. I did a lot of
reading, and experimented some with feedforward nets I wrote using
simulated annealing and backpropagation (which never did work very
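As an illustration of the kind of experiment described above, here is a minimal feedforward net trained with plain backpropagation; the 2-4-1 architecture, XOR task, and learning rate are assumptions made for the sketch, not details from the original post.

# Minimal feedforward net trained with plain backpropagation on XOR.
# Architecture (2-4-1), learning rate, and task are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for epoch in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass for a squared-error loss
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))   # should approach [0, 1, 1, 0]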
On 2/19/07, John Scanlon [EMAIL PROTECTED] wrote:
[...]
Logical deduction or inference is not thought. It is mechanical symbol
manipulation that can be programmed into any scientific pocket
calculator.
[...]
Hi John,
I admire your attitude in attacking the core AI issues =)
One is
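Scanlon's remark that deduction is mechanical symbol manipulation is easy to make concrete: forward-chaining modus ponens is nothing but pattern matching over symbols. A minimal sketch, with a made-up rule base:

# Modus ponens as pure symbol manipulation: no "understanding", just matching.
# The facts and rules below are made-up illustrations.
facts = {"rain"}
rules = [("rain", "wet_streets"), ("wet_streets", "slippery")]  # (antecedent, consequent)

changed = True
while changed:
    changed = False
    for antecedent, consequent in rules:
        if antecedent in facts and consequent not in facts:
            facts.add(consequent)
            changed = True

print(facts)  # {'rain', 'wet_streets', 'slippery'}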
On 3/2/07, Matt Mahoney [EMAIL PROTECTED] wrote:
What about English? Irregular grammar is only a tiny part of the language
modeling problem. Using an artificial language with a regular grammar to
simplify the problem is a false path. If people actually used Lojban
then
it would be used in
On Wednesday 07 March 2007 10:34, YKY (Yan King Yin) wrote:
I discovered something cool: computational pragmatics. You may take a
look at Jerry R Hobbs' paper: Interpretation as Abduction, ...
Nice. Note that one of the reasons that I'm going the numerical route is that
some powerful methods
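The gist of Hobbs' approach is that interpreting an utterance means finding the cheapest set of assumptions which, together with background axioms, entails what was observed. Here is a toy cost-based sketch of that idea; the axioms, costs, and observation are invented for illustration and are not taken from the paper.

# Toy weighted abduction in the spirit of Hobbs' "Interpretation as Abduction":
# explain an observation by assuming literals, preferring the cheapest proof.
# Axioms and costs below are invented for illustration.
AXIOMS = {                      # head <- body
    "noise_at_night": [["cat_on_roof"], ["burglar"]],
}
ASSUME_COST = {"cat_on_roof": 1.0, "burglar": 5.0}

def cheapest_explanation(goal):
    """Return (cost, assumptions) for the cheapest way to account for goal."""
    best = (ASSUME_COST.get(goal, float("inf")), {goal})   # assume it outright
    for body in AXIOMS.get(goal, []):
        cost, assumed = 0.0, set()
        for literal in body:
            c, a = cheapest_explanation(literal)
            cost += c
            assumed |= a
        if cost < best[0]:
            best = (cost, assumed)
    return best

print(cheapest_explanation("noise_at_night"))  # (1.0, {'cat_on_roof'})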
--- YKY (Yan King Yin) [EMAIL PROTECTED] wrote:
On 3/2/07, Matt Mahoney [EMAIL PROTECTED] wrote:
What about English? Irregular grammar is only a tiny part of the language
modeling problem. Using an artificial language with a regular grammar to
simplify the problem is a false path. If
Hmmm, if you could put some basic rules on the randomness (in a database
of Lojban that gives a random statement or series of statements), say to
accept logical statements that could then be applied to input. So say you
said something like le MLAtu cu GLEki (the cat is happy) and later make
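Because Lojban's grammar is regular and unambiguous, a statement like "le mlatu cu gleki" maps onto a predicate with a very small parser. The fragment below handles only the "le X cu Y" pattern and is a deliberately simplified assumption, not a real Lojban parser.

# Deliberately over-simplified sketch: map the tiny Lojban pattern
# "le <sumti> cu <selbri>" onto a predicate, then store it as a fact.
# Real Lojban parsing is far richer than this.
def parse_bridi(text):
    words = text.lower().split()
    if len(words) == 4 and words[0] == "le" and words[2] == "cu":
        argument, predicate = words[1], words[3]
        return (predicate, argument)          # gleki(mlatu): "the cat is happy"
    raise ValueError("only handles 'le X cu Y' statements")

knowledge_base = set()
knowledge_base.add(parse_bridi("le MLAtu cu GLEki"))
print(knowledge_base)   # {('gleki', 'mlatu')}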
--- Andrii (lOkadin) Zvorygin [EMAIL PROTECTED] wrote:
Hmmm, if you could put some basic rules on the randomness (in a database
of Lojban that gives a random statement or series of statements), say to
accept logical statements that could then be applied to input. So say you
said
Do they tell us what grief is doing when a loved one dies?
Well the grief that is felt when a loved one dies is similar to that of
unreturned love. So you love them, and they don't love you back -- as they
are dead. This causes a feeling of futility and eventually changes direction
-- to focus
The key to life, the universe, and everything:
All things can be expressed using any Universal Computer
You are a Universal Computer (one that can read (remember/imagine),
write (experience), erase (forget)).
All the things you believe/know/understand are true.
I believe the key to AI rests in
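Read, write, and erase are exactly the primitives of a minimal tape machine, which is one way to picture the claim. A toy sketch follows; the transition table (a unary incrementer) is an arbitrary illustration.

# Minimal single-tape machine showing the read / write / erase primitives
# the post maps onto remember, experience, and forget. The transition table
# (a unary incrementer) is an arbitrary illustration.
def run(tape, rules, state="start", head=0, blank="_"):
    while state != "halt":
        symbol = tape.get(head, blank)                 # read
        write, move, state = rules[(state, symbol)]
        tape[head] = write                             # write (erase = write blank)
        head += 1 if move == "R" else -1
    return tape

# Increment a unary number: move right over the 1s, then append another 1.
rules = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}
print(run({0: "1", 1: "1"}, rules))  # {0: '1', 1: '1', 2: '1'}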
I've actually been in really different universes. Where you could write text
and it would do as you instructed. I tried checking out the filesystem but
it was barren and bin was empty *shrugs*.
Like I said, You don't have to believe me if you don't want to. I am but
another one of your
On Tue, 20 Feb 2007, Richard Loosemore wrote:
) Bo Morgan wrote:
) On Tue, 20 Feb 2007, Richard Loosemore wrote:
)
) In regard to your comments about complexity theory: from what I understand,
) it is primarily about taking simple physics models and trying to explain
) complicated datasets
Richard Loosemore wrote:
Ben Goertzel wrote:
It's pretty clear that humans don't run FOPC as a native code, but
that we can learn it as a trick.
I disagree. I think that Hebbian learning between cortical columns
is essentially equivalent to basic probabilistic term logic.
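One way to see the intuition behind that claim: Hebbian co-activation counts between two units yield an estimate of P(B|A), which is the strength a probabilistic term-logic link A -> B would carry. The sketch below is a toy rendering of that idea, with made-up data, not Novamente's actual mechanism.

# Toy illustration: Hebbian co-activation counts between two units give a
# conditional-probability estimate, i.e. the truth value a probabilistic
# term-logic link A -> B would carry. The data stream is made up.
import random

random.seed(1)
count_A = 0          # how often unit A fired
count_AB = 0         # how often A and B fired together (the "Hebbian" count)

for _ in range(10000):
    A = random.random() < 0.4           # A fires 40% of the time
    B = A and random.random() < 0.8     # when A fires, B follows 80% of the time
    if A:
        count_A += 1
        if B:
            count_AB += 1               # "cells that fire together wire together"

strength = count_AB / count_A           # estimate of P(B|A)
print(round(strength, 2))               # ~0.8: the strength of the link A -> B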
Is there anyone out there who has a sense that most of the work being done in
AI is still following the same track that has failed for fifty years now? The
focus on logic as thought, or neural nets as the bottom-up, brain-imitating
solution just isn't getting anywhere? It's the same thing,
On Mon, 19 Feb 2007, John Scanlon wrote:
) Is there anyone out there who has a sense that most of the work being
) done in AI is still following the same track that has failed for fifty
) years now? The focus on logic as thought, or neural nets as the
) bottom-up, brain-imitating solution
John Scanlon wrote:
Is there anyone out there who has a sense that most of the work being
done in AI is still following the same track that has failed for fifty
years now? The focus on logic as thought, or neural nets as the
bottom-up, brain-imitating solution just isn't getting anywhere?
On 2/19/07, Bo Morgan [EMAIL PROTECTED] wrote:
On Mon, 19 Feb 2007, John Scanlon wrote:
) Is there anyone out there who has a sense that most of the work being
) done in AI is still following the same track that has failed for fifty
) years now? The focus on logic as thought, or neural nets
It's pretty clear that humans don't
run FOPC as a native code, but that we can learn it as a trick.
I disagree. I think that Hebbian learning between cortical columns is
essentially equivalent to basic probabilistic
term logic.
Lower-level common-sense inferencing of the
On Monday 19 February 2007 16:08, Ben Goertzel wrote:
It's pretty clear that humans don't
run FOPC as a native code, but that we can learn it as a trick.
I disagree. I think that Hebbian learning between cortical columns is
essentially equivalent to basic probabilistic
term logic.
Bo Morgan wrote:
On Mon, 19 Feb 2007, Richard Loosemore wrote:
) Bo Morgan wrote:
)
) On Mon, 19 Feb 2007, John Scanlon wrote:
)
) ) Is there anyone out there who has a sense that most of the work being
) ) done in AI is still following the same track that has failed for
) ) fifty years
Ben Goertzel wrote:
It's pretty clear that humans don't run FOPC as a native code, but
that we can learn it as a trick.
I disagree. I think that Hebbian learning between cortical columns is
essentially equivalent to basic probabilistic term logic.
Lower-level common-sense inferencing
working on.
- Original Message -
From: Eliezer S. Yudkowsky
To: agi@v2.listbox.com
Sent: Monday, February 19, 2007 9:12 PM
Subject: Re: [agi] The Missing Piece
John Scanlon wrote:
Is there anyone out there who has a sense that most of the work being
done in AI is still