In my case (http://nars.wang.googlepages.com/), that scenario won't
happen --- it is impossible for the project to fail. ;-)
Seriously, if it happens, most likely it is because the control
process is too complicated to be handled properly by the designer's
mind. Or, it is possible that the
Dear Ben,
On 9/25/06, Ben Goertzel [EMAIL PROTECTED] wrote:
1) The design is totally workable but just requires much more hardware
than is currently available. (Our current estimates of hardware
requirements for powerful Novamente AGI are back-of-the-envelope
rather than rigorous
Hi,
Just out of curiosity - would you mind sharing your hardware estimates
with the list? I would personally find that fascinating.
Many thanks,
Stefan
Well, here is one way to slice it... there are many, of course...
Currently the bottleneck for Novamente's cognitive processing is the
Looking at past and current (likely) failures: trying to solve the wrong problem in the first place, or not having good enough theory/approaches to solving the right problems, or poor implementation.
However, even though you specifically restricted your question
In my way of seeing things, when projects reach that "non-salvageable" status, one is likely to find serious theoretical errors, possibly made at the beginning of the journey. That's a problem we cannot avoid, because none of us knows precisely what it is that we must do to achieve general intelligence.
Peter Voss mentioned trying to solve the wrong problem in the first place as a source of failure in an AGI project. This was actually the first thing that I thought of, and it brought to my mind a problem that I think of when considering general intelligence theories: object permanence. Now, I
However, in the current day, I would say that we can list some principles with which any successful project must comply. Anyone want to start the list?
Sergio Navega.
Sergio,
While this is an interesting pursuit, I find it much more difficult than the already-hard problem of articulating some
From: J. Storrs Hall, PhD. [EMAIL PROTECTED]
(4) Wasting time on symbol grounding. (This wouldn't be a problem for a 10-year, $50M project, but the question was put to us for 5 years.) A computer has direct access to enough domains of discourse (such as basic math) that there's no need to try
On Monday 25 September 2006 16:48, Ben Goertzel wrote:
My own view is that symbol grounding is not a waste of time ... but,
**exclusive reliance** on symbol grounding is a waste of time.
It's certainly not a waste of time in the general sense, especially if you're
going to be building a robot!
Ben, I take it you're using the word hypergraph in the strict mathematical sense. What do you gain from a hypergraph over an ordinary graph, in terms of representability, say?
To return to the topic, didn't Minsky say that 'the trick is that there is no trick'? I doubt there's any single point of
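For readers following along, here is a minimal sketch of the representational difference being asked about (the data structures and names are illustrative inventions, not Novamente's actual KR): a hyperedge can connect any number of nodes at once, so an n-ary relation such as gives(Alice, Bob, book) is a single edge, whereas in an ordinary graph the relation must be reified as an extra node held together by labeled binary edges.

```python
# Sketch only: n-ary relations in a hypergraph vs. an ordinary graph.
# All names here are hypothetical, chosen for illustration.

# Hypergraph form: one hyperedge links all participants of a relation directly.
hyperedges = [
    ("gives", ("Alice", "Bob", "book")),   # a single 3-ary edge
]

# Ordinary-graph form: the same relation is reified as an extra node
# ("gives_1") connected to each participant by a labeled binary edge.
edges = [
    ("gives_1", "agent",  "Alice"),
    ("gives_1", "patient", "Bob"),
    ("gives_1", "object", "book"),
]

def participants_hyper(label, hes):
    """All node tuples taking part in relation `label` (hypergraph form)."""
    return [nodes for (l, nodes) in hes if l == label]

def participants_graph(rel_node, es):
    """Reassemble a reified relation from its binary edges (graph form)."""
    return {role: target for (src, role, target) in es if src == rel_node}

print(participants_hyper("gives", hyperedges))  # [('Alice', 'Bob', 'book')]
print(participants_graph("gives_1", edges))
```

As the sketch suggests, the two encodings are formally inter-convertible via reification, so the gain is not raw expressive power but locality and convenience: an n-ary relation stays a single first-class object instead of being scattered across several binary edges.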
On 9/26/06, Ben Goertzel [EMAIL PROTECTED] wrote:
But, what I would say in response to you is: If you presume a **bad** KR format, you can't match it with a learning mechanism that reliably fills one's knowledge repository with knowledge... If you presume a sufficiently and appropriately flexible KR
Ben Goertzel wrote:
Hi,
The real grounding problem is the awkward and annoying fact that if
you presume a KR format, you can't reverse engineer a learning mechanism
that reliably fills that KR with knowledge.
Sure...
To go back to the source, in