2008/7/3 Terren Suydam [EMAIL PROTECTED]:
--- On Wed, 7/2/08, William Pearson [EMAIL PROTECTED] wrote:
Evolution! I'm not saying your way can't work, just saying why I shortcut where I do. Note a thing has a purpose if it is useful to apply the design stance* to it. There are two things to
2008/7/2 Vladimir Nesov [EMAIL PROTECTED]:
On Thu, Jul 3, 2008 at 12:59 AM, William Pearson [EMAIL PROTECTED] wrote:
2008/7/2 Vladimir Nesov [EMAIL PROTECTED]:
On Wed, Jul 2, 2008 at 9:09 PM, William Pearson [EMAIL PROTECTED] wrote:
They would get less credit from the human supervisor. Let me
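The credit mechanism is not spelled out in the thread; as a rough sketch of programs getting "less credit from the human supervisor", with the names and the equal-split rule being my own assumptions:

  # Hypothetical sketch, not Pearson's actual design: a supervisor-issued
  # scalar reward is split among the programs active in an episode, so
  # consistently unhelpful programs accumulate less credit over time.
  from collections import defaultdict

  credit = defaultdict(float)

  def reward_active_programs(active, signal):
      """Split a supervisor-issued reward equally among active programs."""
      share = signal / len(active)
      for program_id in active:
          credit[program_id] += share

  reward_active_programs(["A", "B"], 1.0)   # helpful episode
  reward_active_programs(["A'"], -0.5)      # unhelpful episode: A' loses credit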
On Thu, Jul 3, 2008 at 10:45 AM, William Pearson [EMAIL PROTECTED] wrote:
Nope. I don't include B in A because if A' is faulty it can cause
problems to whatever is in the same vmprogram as it, by overwriting
memory locations. A' being a separate vmprogram means it is insulated
from the B and
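A minimal sketch of the isolation property being claimed, assuming per-vmprogram private memory (the actual VM design is not given in the thread):

  # Assumed semantics, illustrative only: each vmprogram owns a private
  # memory array, so a faulty program's stray writes can only corrupt its
  # own state, never another vmprogram's.
  class VMProgram:
      def __init__(self, name, size):
          self.name = name
          self.memory = bytearray(size)  # private to this vmprogram

      def write(self, address, value):
          if not 0 <= address < len(self.memory):
              raise MemoryError(f"{self.name}: write outside own memory")
          self.memory[address] = value

  a_prime = VMProgram("A'", 256)
  b = VMProgram("B", 256)
  a_prime.write(10, 42)            # fine: stays within A''s own memory
  try:
      a_prime.write(9999, 42)      # faulty write fails instead of touching B
  except MemoryError as err:
      print(err)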
On Thu, Jul 3, 2008 at 4:05 PM, William Pearson [EMAIL PROTECTED] wrote:
2008/7/3 Vladimir Nesov [EMAIL PROTECTED]:
On Thu, Jul 3, 2008 at 10:45 AM, William Pearson [EMAIL PROTECTED] wrote:
Nope. I don't include B in A because if A' is faulty it can cause
problems to whatever is in the same
Sorry about the long thread jack
2008/7/3 Vladimir Nesov [EMAIL PROTECTED]:
On Thu, Jul 3, 2008 at 4:05 PM, William Pearson [EMAIL PROTECTED] wrote:
Because it is dealing with powerful stuff, when it gets it wrong it
goes wrong powerfully. You could lock the experimental code away in a sandbox
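One generic way to "lock the experimental code away", sketched under the assumption that a hard kill after a time limit counts as containment; this is not the thread's specific proposal:

  # Run experimental code in a separate process with a hard time limit,
  # so a runaway experiment is killed rather than taking the rest of the
  # system down with it. Illustrative containment sketch only.
  import multiprocessing
  import time

  def runaway_experiment():
      while True:        # stands in for experimental code gone wrong
          time.sleep(1)

  def run_sandboxed(experiment, timeout_seconds=5):
      proc = multiprocessing.Process(target=experiment)
      proc.start()
      proc.join(timeout_seconds)
      if proc.is_alive():          # went wrong powerfully:
          proc.terminate()         # contain it, don't let it run on
          proc.join()
          return False
      return proc.exitcode == 0

  if __name__ == "__main__":
      print(run_sandboxed(runaway_experiment, timeout_seconds=1))  # False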
Ed Porter wrote:
WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?
Here is an important practical, conceptual problem I am having trouble with.
In an article entitled “Are Cortical Models Really Bound by the ‘Binding Problem’?” Tomaso Poggio’s group at MIT takes
In general I agree with Richard Loosemore's reply.
Also, I think that it is not surprising that the approaches referred
to (gen/comp hierarchies, Hinton's hierarchies, hierarchical-temporal
memory, and many similar approaches) become too large if we try to use
them for more than the first few
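A back-of-envelope illustration of that blow-up (the fan-in and the base alphabet size are arbitrary assumptions): if each higher-level unit stands for one conjunction of k lower-level units, the unit count explodes within a few levels.

  # Illustrative numbers only: strictly conjunctive hierarchies grow
  # combinatorially with depth.
  from math import comb

  units = 20  # feature units at the lowest level (assumed)
  k = 3       # lower-level units bound per higher-level conjunction (assumed)
  for level in range(1, 4):
      units = comb(units, k)
      print(f"level {level}: {units} conjunctive units")
  # level 1: 1140; level 2: ~2.5e8; level 3: ~2.5e24 -- far too large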
On Thursday 03 July 2008 11:14:15 am Vladimir Nesov wrote:
On Thu, Jul 3, 2008 at 9:36 PM, William Pearson [EMAIL PROTECTED] wrote:...
I know this doesn't have the properties you would look for in a
friendly AI set to dominate the world. But I think it is similar to
the way humans work,
William and Vladimir,
IMHO this discussion is based entirely on the absence of any sort of
interface spec. Such a spec is absolutely necessary for a large AGI project
to ever succeed, and such a spec could (hopefully) be wrung out to at least
avoid the worst of the potential traps.
For example:
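What follows is a hypothetical fragment of such a spec, illustrative only and not the example the message goes on to give:

  # Hypothetical interface fragment: even a minimal spec of what passes
  # between modules rules out whole classes of integration surprises.
  # All names and limits here are invented for illustration.
  from dataclasses import dataclass

  @dataclass(frozen=True)
  class ModuleMessage:
      sender: str     # registered module name
      recipient: str  # registered module name
      payload: bytes  # opaque to the router; schema owned by the recipient
      priority: int   # 0 (background) .. 9 (interrupt), bounded by the spec

  MAX_PAYLOAD_BYTES = 65536  # assumed cap: no single module floods the rest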
On Wed, Jul 2, 2008 at 5:31 AM, Terren Suydam [EMAIL PROTECTED] wrote:
Nevertheless, generalities among different instances of complex systems have
been identified, see for instance:
http://en.wikipedia.org/wiki/Feigenbaum_constants
To be sure, but there are also plenty of complex systems
That may be true, but it misses the point I was making, which was a response to
Richard's lament about the seeming lack of any generality from one complex
system to the next. The fact that Feigenbaum's constants describe complex
systems of different kinds is remarkable because it suggests an
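Concretely, the constant can be estimated from the logistic map's period-doubling points (the r values below are the standard published figures):

  # Successive bifurcation points of the logistic map x -> r*x*(1-x);
  # the ratio of successive gaps converges to Feigenbaum's delta ~ 4.6692.
  r = [3.0, 3.449490, 3.544090, 3.564407, 3.568759]  # period 2, 4, 8, 16, 32
  for n in range(1, len(r) - 1):
      delta = (r[n] - r[n - 1]) / (r[n + 1] - r[n])
      print(f"estimate {n}: {delta:.4f}")  # 4.7515, 4.6562, 4.6685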
Will,
Remember when I said that a purpose is not the same thing as a goal?
The purpose that the system might be said to have embedded is attempting to maximise a certain signal. This purpose presupposes no ontology. The fact that this signal is attached to a human means the system as a
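A minimal sketch of a purpose that "presupposes no ontology": the optimiser below only pushes one opaque scalar up, representing nothing about where the signal comes from. The hill-climbing rule and the toy human-approval function are illustrative assumptions.

  # The system's whole "purpose": maximise one scalar. That a human sits
  # on the other end of the wire is outside anything it represents.
  import random

  def maximise(signal, steps=1000, step_size=0.1):
      """Hill-climb an opaque scalar signal; no model of what it means."""
      x, best = 0.0, signal(0.0)
      for _ in range(steps):
          candidate = x + random.uniform(-step_size, step_size)
          value = signal(candidate)
          if value > best:   # keep whatever the signal rewarded
              x, best = candidate, value
      return x

  # The human end of the wire, opaque to the optimiser (toy stand-in):
  human_approval = lambda x: -(x - 2.0) ** 2
  print(maximise(human_approval))  # drifts toward x ~ 2.0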