Will,
--- On Tue, 7/15/08, William Pearson [EMAIL PROTECTED] wrote:
And I would also say of evolved systems. My fingers' purpose could equally well be said to be for picking ticks out of the hair of my kin or for touch typing. E.g. why do I keep my fingernails short, so that they do not
2008/7/14 Terren Suydam [EMAIL PROTECTED]:
Will,
--- On Fri, 7/11/08, William Pearson [EMAIL PROTECTED] wrote:
Purpose and goal are not intrinsic to systems.
I agree this is true with designed systems.
And I would also say of evolved systems. My fingers' purpose could equally well be said
Will,
--- On Fri, 7/11/08, William Pearson [EMAIL PROTECTED] wrote:
Purpose and goal are not intrinsic to systems.
I agree this is true with designed systems. The designed system is ultimately
an extension of the designer's mind, wherein lies the purpose. Of course, as
you note, the system
William,
On 7/7/08, William Pearson [EMAIL PROTECTED] wrote:
2008/7/3 Steve Richfield [EMAIL PROTECTED]:
William and Vladimir,
IMHO this discussion is based entirely on the absence of any sort of
interface spec. Such a spec is absolutely necessary for a large AGI
project
to ever
2008/7/3 Steve Richfield [EMAIL PROTECTED]:
William and Vladimir,
IMHO this discussion is based entirely on the absence of any sort of
interface spec. Such a spec is absolutely necessary for a large AGI project
to ever succeed, and such a spec could (hopefully) be wrung out to at least
avoid
Terren,
Remember when I said that a purpose is not the same thing as a goal? The purpose that the system might be said to have embedded is attempting to maximise a certain signal. This purpose presupposes no ontology. The fact that this signal is attached to a human means the system as a
Will,
--- On Fri, 7/4/08, William Pearson [EMAIL PROTECTED] wrote:
Does the following make sense? The purpose embedded within the system will be to try and make the system not decrease in its ability to receive some abstract number.
The way I connect up the abstract number to the real world
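A minimal sketch of the idea in Python, assuming the abstract number is a single scalar arriving on a reward line; the names here (RewardChannel, receive) are hypothetical, not from Will's design:

    # Hypothetical sketch: the system's embedded purpose is only to keep
    # its ability to receive an abstract number (a scalar reward signal).
    class RewardChannel:
        def __init__(self):
            self.total = 0.0

        def receive(self, amount):
            # The number arrives from outside (e.g. a human judgement);
            # the system never interprets it, it only accumulates it.
            self.total += amount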
2008/7/3 Terren Suydam [EMAIL PROTECTED]:
--- On Wed, 7/2/08, William Pearson [EMAIL PROTECTED] wrote:
Evolution! I'm not saying your way can't work, just saying why I short cut where I do. Note a thing has a purpose if it is useful to apply the design stance* to it. There are two things to
2008/7/2 Vladimir Nesov [EMAIL PROTECTED]:
On Thu, Jul 3, 2008 at 12:59 AM, William Pearson [EMAIL PROTECTED] wrote:
2008/7/2 Vladimir Nesov [EMAIL PROTECTED]:
On Wed, Jul 2, 2008 at 9:09 PM, William Pearson [EMAIL PROTECTED] wrote:
They would get less credit from the human supervisor. Let me
On Thu, Jul 3, 2008 at 10:45 AM, William Pearson [EMAIL PROTECTED] wrote:
Nope. I don't include B in A because if A' is faulty it can cause
problems to whatever is in the same vmprogram as it, by overwriting
memory locations. A' being a separate vmprogram means it is insulated
from the B and
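A toy illustration of the insulation point, assuming each vmprogram owns a private address space (names hypothetical, not Will's actual mechanism):

    # Hypothetical sketch: each vmprogram gets its own memory array, so a
    # faulty A' can only clobber its own locations, never B's.
    class VMProgram:
        def __init__(self, size):
            self.memory = [0] * size  # private to this vmprogram

        def poke(self, addr, value):
            # An out-of-range write fails here instead of silently
            # corrupting a neighbouring vmprogram.
            if not 0 <= addr < len(self.memory):
                raise IndexError("write outside own address space")
            self.memory[addr] = value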
2008/7/3 Vladimir Nesov [EMAIL PROTECTED]:
On Thu, Jul 3, 2008 at 10:45 AM, William Pearson [EMAIL PROTECTED] wrote:
Nope. I don't include B in A because if A' is faulty it can cause
problems to whatever is in the same vmprogram as it, by overwriting
memory locations. A' being a separate
On Thu, Jul 3, 2008 at 4:05 PM, William Pearson [EMAIL PROTECTED] wrote:
2008/7/3 Vladimir Nesov [EMAIL PROTECTED]:
On Thu, Jul 3, 2008 at 10:45 AM, William Pearson [EMAIL PROTECTED] wrote:
Nope. I don't include B in A because if A' is faulty it can cause
problems to whatever is in the same
Sorry about the long thread jack
2008/7/3 Vladimir Nesov [EMAIL PROTECTED]:
On Thu, Jul 3, 2008 at 4:05 PM, William Pearson [EMAIL PROTECTED] wrote:
Because it is dealing with powerful stuff, when it gets it wrong it
goes wrong powerfully. You could lock the experimental code away in a
sand
William and Vladimir,
IMHO this discussion is based entirely on the absence of any sort of
interface spec. Such a spec is absolutely necessary for a large AGI project
to ever succeed, and such a spec could (hopefully) be wrung out to at least
avoid the worst of the potential traps.
For example:
Will,
Remember when I said that a purpose is not the same thing as a goal? The purpose that the system might be said to have embedded is attempting to maximise a certain signal. This purpose presupposes no ontology. The fact that this signal is attached to a human means the system as a
Sorry about the late reply.
snip some stuff sorted out
2008/6/30 Vladimir Nesov [EMAIL PROTECTED]:
On Tue, Jul 1, 2008 at 2:02 AM, William Pearson [EMAIL PROTECTED] wrote:
2008/6/30 Vladimir Nesov [EMAIL PROTECTED]:
If internals are programmed by humans, why do you need an automatic system to
Terren,
This is going too far. We can reconstruct to a considerable extent how
humans think about problems - their conscious thoughts. Artists have been
doing this reasonably well for hundreds of years. Science has so far avoided
this, just as it avoided studying first the mind, with
On Wed, Jul 2, 2008 at 2:48 PM, William Pearson [EMAIL PROTECTED] wrote:
Okay let us clear things up. There are two things that need to be
designed, a computer architecture or virtual machine and programs that
form the initial set of programs within the system. Let us call the
internal
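To make the two layers concrete, a hedged sketch (my own naming, not Will's): the virtual machine is fixed, and the initial program set is just data loaded into it.

    # Hypothetical sketch of the two design layers: a fixed virtual
    # machine plus a replaceable seed population of internal programs.
    class VirtualMachine:
        def __init__(self, initial_programs):
            self.programs = list(initial_programs)  # the seed population

        def run_cycle(self):
            for p in self.programs:
                p.step(self)  # programs act on the world through the VM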
Mike,
This is going too far. We can reconstruct to a considerable
extent how humans think about problems - their conscious thoughts.
Why is it going too far? I agree with you that we can reconstruct thinking, to
a point. I notice you didn't say we can completely reconstruct how humans
Terren,
Obviously, as I indicated, I'm not suggesting that we can easily construct a
total model of human cognition. But it ain't that hard to reconstruct
reasonable and highly informative, if imperfect, models of how humans
consciously think about problems. As I said, artists have been
2008/7/2 Terren Suydam [EMAIL PROTECTED]:
Mike,
This is going too far. We can reconstruct to a considerable
extent how humans think about problems - their conscious thoughts.
Why is it going too far? I agree with you that we can reconstruct thinking,
to a point. I notice you didn't say
Mike,
That's a rather weak reply. I'm open to the possibility that my ideas are
incorrect or need improvement, but calling what I said nonsense without further
justification is just hand waving.
Unless you mean this as your justification:
Your conscious, inner thoughts are not that different
Will,
My plan is to go for 3) Usefulness. Cognition is useful from an evolutionary point of view; if we try to create systems that are useful in the same situations (social, building world models), then we might one day stumble upon cognition.
Sure, that's a valid approach for creating
2008/7/2 Vladimir Nesov [EMAIL PROTECTED]:
On Wed, Jul 2, 2008 at 2:48 PM, William Pearson [EMAIL PROTECTED] wrote:
Okay let us clear things up. There are two things that need to be
designed, a computer architecture or virtual machine and programs that
form the initial set of programs within
How do you assign credit to programs that are good at generating good
children? Particularly, could a program specialize in this, so that it
doesn't do anything useful directly but always through making highly
useful children?
On Wed, Jul 2, 2008 at 1:09 PM, William Pearson [EMAIL PROTECTED]
2008/7/2 Abram Demski [EMAIL PROTECTED]:
How do you assign credit to programs that are good at generating good
children?
I never directly assign credit, apart from the first stage. The rest
of the credit assignment is handled by the vmprograms, er,
programming.
Particularly, could a program
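One way the indirect scheme could work, as a hedged sketch (the names Account and pay are mine): the supervisor pays credit in at one point only, and programs pass it on to whichever programs supplied them, parents included.

    # Hypothetical sketch: credit enters once, then flows between
    # vmprograms by explicit payments, so a program that specialises in
    # making useful children earns only what the children pay back.
    class Account:
        def __init__(self):
            self.credit = 0.0

    def pay(payer, payee, amount):
        amount = min(amount, payer.credit)  # can't spend unearned credit
        payer.credit -= amount
        payee.credit += amount

A child that found its parent's services useful would call pay(child, parent, fraction), so "good parenting" can still pay, but only through the children's own earnings.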
2008/7/2 Vladimir Nesov [EMAIL PROTECTED]:
On Wed, Jul 2, 2008 at 9:09 PM, William Pearson [EMAIL PROTECTED] wrote:
They would get less credit from the human supervisor. Let me expand on
what I meant about the economic competition. Let us say vmprogram A
makes a copy of itself, called A', with
On Thu, Jul 3, 2008 at 12:59 AM, William Pearson [EMAIL PROTECTED] wrote:
2008/7/2 Vladimir Nesov [EMAIL PROTECTED]:
On Wed, Jul 2, 2008 at 9:09 PM, William Pearson [EMAIL PROTECTED] wrote:
They would get less credit from the human supervisor. Let me expand on
what I meant about the economic
2008/6/30 Vladimir Nesov [EMAIL PROTECTED]:
On Tue, Jul 1, 2008 at 8:31 AM, Linas Vepstas [EMAIL PROTECTED] wrote:
Why binary?
I once skimmed a biography of Ramanujan, he started
multiplying numbers in his head as a pre-teen. I suspect
it was grindingly boring, but given the surroundings,
On Tue, Jul 1, 2008 at 10:02 AM, Linas Vepstas [EMAIL PROTECTED] wrote:
What are you trying to accomplish here? I don't see where
you are trying to go with this.
I don't think a human can consciously train one or two neurons
to do something, we train millions at a time. -- I'm guessing
I was nearly kicked out of school in seventh grade for coming up with a method
of manipulating (multiplying, dividing) large numbers in my head using what I
later learned was a shift-reduce method. It was similar to this:
http://www.metacafe.com/watch/742717/human_calculator/
My seventh
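For comparison, the left-to-right shift-and-add trick in code form (my reconstruction, not necessarily the variant in the video):

    # Left-to-right mental multiplication: keep one running total and
    # shift it a digit place at each step.
    def multiply_left_to_right(a, b):
        total = 0
        for digit in str(b):  # most significant digit of b first
            total = total * 10 + a * int(digit)
        return total

    assert multiply_left_to_right(47, 386) == 47 * 386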
2008/6/30 Terren Suydam [EMAIL PROTECTED]:
Hi Will,
--- On Mon, 6/30/08, William Pearson [EMAIL PROTECTED] wrote:
The only way to talk coherently about purpose within the computation is to simulate self-organized, embodied systems.
I don't think you are quite getting my system. If you
Terren: It's to make the larger point that we may be so immersed in our own
conceptualizations of intelligence - particularly because we live in our
models and draw on our own experience and introspection to elaborate them -
that we may have tunnel vision about the possibilities for better or
2008/7/1 Vladimir Nesov [EMAIL PROTECTED]:
On Tue, Jul 1, 2008 at 10:02 AM, Linas Vepstas [EMAIL PROTECTED] wrote:
What are you trying to accomplish here? I don't see where
you are trying to go with this.
I don't think a human can consciously train one or two neurons
to do something, we
Will,
I think the original issue was about purpose. In your system, since a human is
the one determining which programs are performing the best, the purpose is
defined in the mind of the human. Beyond that, it certainly sounds as if it is
a self-organizing system.
Terren
--- On Tue,
Hi Mike,
My points about the pitfalls of theorizing about intelligence apply to any and
all humans who would attempt it - meaning, it's not necessary to characterize
AI folks in one way or another. There are any number of aspects of intelligence
we could highlight that pose a challenge to
of computation.
And what is our mind but the weather in our brains?
Terren
--- On Sun, 6/29/08, Ben Goertzel [EMAIL PROTECTED] wrote:
From: Ben Goertzel [EMAIL PROTECTED]
Subject: Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN
AGI
To: agi@v2.listbox.com
Date: Sunday, June 29
[EMAIL PROTECTED] wrote:
From: Ben Goertzel [EMAIL PROTECTED]
Subject: Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE
IN AGI
To: agi@v2.listbox.com
Date: Sunday, June 29, 2008, 10:44 PM
Richard,
I think that it would be possible to formalize your
complex systems argument
On Mon, Jun 30, 2008 at 8:07 AM, Terren Suydam [EMAIL PROTECTED] wrote:
By the way, just wanted to point out a beautifully simple example - perhaps
the simplest - of an irreducibility in complex systems.
Individual molecular interactions are symmetric in time, they work the same
forwards
--- On Mon, 6/30/08, Ben Goertzel [EMAIL PROTECTED] wrote:
but I don't agree that predicting **which** AGI designs can lead
to the emergent properties corresponding to general intelligence,
is pragmatically impossible to do in an analytical and rational way ...
OK, I grant you that you may be
I agree that all designed systems have limitations, but I also suggest
that all evolved systems have limitations.
This is just the no free lunch theorem -- in order to perform better
than random search at certain optimization tasks, a system needs to
have some biases built in, and these biases
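For reference, the usual Wolpert-Macready statement (quoting from memory, so check the 1997 paper): for any two search algorithms $a_1$ and $a_2$ and any number of steps $m$,

    \sum_f P(d_m^y \mid f, m, a_1) = \sum_f P(d_m^y \mid f, m, a_2)

i.e. averaged over all objective functions $f$, every algorithm sees the same distribution of observed values, so above-average performance on one class of problems is paid for on its complement.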
On Mon, Jun 30, 2008 at 8:07 AM, Terren Suydam [EMAIL PROTECTED] wrote:
By the way, just wanted to point out a beautifully simple example - perhaps
the simplest - of an irreducibility in complex systems.
Individual molecular interactions are symmetric in time, they work the same
forwards
On Mon, Jun 30, 2008 at 8:31 AM, Lukasz Stafiniak [EMAIL PROTECTED] wrote:
P.S. The biggest issue that spoiled my joy of reading Permutation City is that you cannot simulate dynamic systems (= numerically solve differential equations) out of order; you need to know time t to compute time t+1
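The point in code, as a minimal explicit-Euler sketch: each state depends on the previous one, so there is no computing t+1 before t.

    # Minimal sketch: explicit Euler integration of dx/dt = f(x). The
    # loop is inherently sequential -- x at t+dt needs x at t.
    def euler(f, x0, dt, steps):
        x = x0
        trajectory = [x]
        for _ in range(steps):
            x = x + dt * f(x)  # must know time t to compute time t+dt
            trajectory.append(x)
        return trajectory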
Ben,
I agree, an evolved design has limits too, but the key difference between a
contrived design and one that is allowed to evolve is that the evolved
critter's intelligence is grounded in the context of its own 'experience',
whereas the contrived one's intelligence is grounded in the
As far as I can tell, all you've done is give the irreducibility a name:
statistical mechanics. You haven't explained how the arrow of time emerges
from the local level to the global. Or, maybe I just don't understand it... can
you dumb it down for me?
Terren
--- On Mon, 6/30/08, Lukasz
2008/6/30 Terren Suydam [EMAIL PROTECTED]:
Ben,
I agree, an evolved design has limits too, but the key difference between a
contrived design and one that is allowed to evolve is that the evolved
critter's intelligence is grounded in the context of its own 'experience',
whereas the
On Mon, Jun 30, 2008 at 10:34 PM, William Pearson [EMAIL PROTECTED] wrote:
I'm seeking to do something half way between what you suggest (from
bacterial systems to human alife) and AI. I'd be curious to know
whether you think it would suffer from the same problems.
First are we agreed that
Hi William,
A Von Neumann computer is just a machine. Its only purpose is to compute. When you get into higher-level purpose, you have to go up a level to the stuff being computed. Even then, the purpose is in the mind of the programmer. The only way to talk coherently about purpose within
Terren: One of the basic threads of scientific progress is the ceaseless denigration of the idea that there is something special about humans
Not quite so. There is a great deal of exceptionalism in science - hence
evolutionary psychology actually only deals with human evolution. If there
were
Hello Terren
A Von Neumann computer is just a machine. Its only purpose is to compute. When you get into higher-level purpose, you have to go up a level to the stuff being computed. Even then, the purpose is in the mind of the programmer.
What I don't see is why your simulation gets away
Hi Mike,
Evidently I didn't communicate that so clearly because I agree with you 100%.
Terren
--- On Mon, 6/30/08, Mike Tintner [EMAIL PROTECTED] wrote:
Terren: One of the basic threads of scientific progress is the ceaseless denigration of the idea that there is something special about
2008/6/30 Vladimir Nesov [EMAIL PROTECTED]:
On Mon, Jun 30, 2008 at 10:34 PM, William Pearson [EMAIL PROTECTED] wrote:
I'm seeking to do something half way between what you suggest (from
bacterial systems to human alife) and AI. I'd be curious to know
whether you think it would suffer from
On Tue, Jul 1, 2008 at 1:31 AM, William Pearson [EMAIL PROTECTED] wrote:
2008/6/30 Vladimir Nesov [EMAIL PROTECTED]:
It is a wrong level of organization: computing hardware is the physics
of computation, it isn't meant to implement specific algorithms, so I
don't quite see what you are
2008/6/30 Vladimir Nesov [EMAIL PROTECTED]:
On Tue, Jul 1, 2008 at 1:31 AM, William Pearson [EMAIL PROTECTED] wrote:
2008/6/30 Vladimir Nesov [EMAIL PROTECTED]:
It is a wrong level of organization: computing hardware is the physics
of computation, it isn't meant to implement specific
I wrote a book about the emergence of spontaneous creativity from
underlying complex dynamics. It was published in 1997 with the title
From Complexity to Creativity. Some of the material is dated but I
still believe the basic ideas make sense. Some of the main ideas were
reviewed in The Hidden
Hi Will,
--- On Mon, 6/30/08, William Pearson [EMAIL PROTECTED] wrote:
The only way to talk coherently about purpose within the computation is to simulate self-organized, embodied systems.
I don't think you are quite getting my system. If you had a bunch of programs that did the
Could you say that it takes a complex system to know a complex system? If an AGI is going to try to, say, predict the weather, it doesn't have infinite CPU cycles to simulate, so it'll have to come up with something better. Sure, it can build a probabilistic historical model but that is kind of
Ben,
Be that as it may, spontaneous insight was just one example of an aspect of
human intelligence that's not well understood. I'll give you another one that
is more difficult to theorize about - I assume you've heard of the savant
Daniel Tammet who is able to do amazing feats of computation
2008/6/30 Terren Suydam [EMAIL PROTECTED]:
savant
I've always theorized that savants can do what they do because
they've been able to get direct access to, and train, a fairly
small number of neurons in their brain, to accomplish highly
specialized (and thus rather unusual) calculations.
I'm
Interesting: is it possible to train yourself to run a specially
designed nontrivial inference circuit based on low-base
transformations (e.g. binary)? You start by assigning unique symbols
to its nodes, train yourself to stably perform associations
implementing its junctions, and then assemble it
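A rough model of what assembling such a circuit might mean, with made-up symbols: each trained junction is a memorised association table, and you run the circuit by chaining lookups.

    # Hypothetical sketch: a junction is a memorised lookup from input
    # symbols to an output symbol; a circuit just chains junctions.
    NAND = {("0", "0"): "1", ("0", "1"): "1",
            ("1", "0"): "1", ("1", "1"): "0"}

    def run_circuit(a, b):
        # XOR assembled from four memorised NAND associations.
        c = NAND[(a, b)]
        return NAND[(NAND[(a, c)], NAND[(c, b)])]

    assert run_circuit("1", "0") == "1" and run_circuit("1", "1") == "0"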
2008/6/30 Vladimir Nesov [EMAIL PROTECTED]:
Interesting: is it possible to train yourself to run a specially
designed nontrivial inference circuit based on low-base
transformations (e.g. binary)?
Why binary?
I once skimmed a biography of Ramanujan, he started
multiplying numbers in his head
On Tue, Jul 1, 2008 at 8:31 AM, Linas Vepstas [EMAIL PROTECTED] wrote:
Why binary?
I once skimmed a biography of Ramanujan, he started
multiplying numbers in his head as a pre-teen. I suspect
it was grindingly boring, but given the surroundings, might
have been the most fun thing he could
Subject: Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE
IN AGI
Ed Porter wrote:
I do not claim the software architecture for AGI has been totally solved.
But I believe that enough good AGI approaches exist (and I think Novamente
is one) that when powerful hardware is available to more
Ed: Another reason for optimism is Hinton's new work described in papers such as "Modeling image patches with a directed hierarchy of Markov random fields" by Simon Osindero and Geoffrey Hinton and the Google Tech Talk at http://www.youtube.com/watch?v=AyzOUbkUf3M. Hinton has shown how to
Tintner [mailto:[EMAIL PROTECTED]
Sent: Sunday, June 29, 2008 2:48 PM
To: agi@v2.listbox.com
Cc: [EMAIL PROTECTED]
Subject: Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE
IN AGI
Ed: Another reason for optimism is Hinton's new work described in papers such as Modeling image patches
I agree that the hardware advances are inspirational, and it seems possible
that just having huge hardware around could change the way people think and
encourage new ideas.
But what I'm really looking forward to is somebody producing a very impressive
general intelligence result that was just
Ben Goertzel wrote:
Richard,
So long as the general response to the complex systems problem is not "This could be a serious issue, let's put our heads together to investigate it," but "My gut feeling is that this is just not going to be a problem," or "Quit rocking the boat!", you can bet that
On Friday 27 June 2008, Richard Loosemore wrote:
Pardon my fury, but the problem is understanding HOW TO DO IT, and
HOW TO BUILD THE TOOLS TO DO IT, not having expensive hardware. So
long as some people on this list repeat this mistake, this list will
degenerate even further into
The argument itself is extremely rigorous: on all the occasions on which
someone has disputed the rigorousness of the argument, they have either
addressed some other issue entirely or they have just waved their hands
without showing any sign of understanding the argument, and then said ...
get into discussions with you. It's fun
sometimes, but.
Back to work.
Richard Loosemore
Ed Porter
-Original Message-
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
Sent: Saturday, June 28, 2008 4:14 PM
To: agi@v2.listbox.com
Subject: Re: [agi] WHAT SORT OF HARDWARE
Ben Goertzel wrote:
The argument itself is extremely rigorous: on all the occasions on which
someone has disputed the rigorousness of the argument, they have either
addressed some other issue entirely or they have just waved their hands
without showing any sign of understanding the argument,
Richard,
I think that it would be possible to formalize your complex systems argument
mathematically, but I don't have time to do so right now.
Or, then again ... perhaps I am wrong: maybe you really *cannot* understand anything except math?
It's not the case that I can only understand
Richard, if the list is so dead of late, how come you have posted to it so
often recently?
-Original Message-
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
Sent: Friday, June 27, 2008 4:30 PM
To: agi@v2.listbox.com
Subject: Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K
There was one little line in this post that struck me, and I wanted to
comment:
Quoting Ed Porter [EMAIL PROTECTED]:
With regard to performance, such systems are not even close to human brain level but they should allow some interesting proofs of concept
Mentioning some huge system. My
Ed Porter wrote:
I do not claim the software architecture for AGI has been totally solved.
But I believe that enough good AGI approaches exist (and I think Novamente
is one) that when powerful hardware is available to more people we will be able
to relatively quickly get systems up and running that
read they would be indexed
and be much more rapidly available for future access when relevant.
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Saturday, June 28, 2008 3:36 PM
To: agi@v2.listbox.com
Subject: Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS
On Sat, Jun 28, 2008 at 4:13 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
Ed Porter wrote:
I do not claim the software architecture for AGI has been totally solved.
But I believe that enough good AGI approaches exist (and I think Novamente
is one) that when powerful hardware is available to
Richard and Ed,
Insanity is doing the same thing over and over again and expecting different
results. - Albert Einstein
Prelude to insanity: unintentionally doing the same thing over and over again
and getting the same results. - Me
Cheers,
Brad
Richard Loosemore wrote:
Ed Porter wrote:
Richard,
So long as the general response to the complex systems problem is not "This could be a serious issue, let's put our heads together to investigate it," but "My gut feeling is that this is just not going to be a problem," or "Quit rocking the boat!", you can bet that nobody really wants to
At a quick glance I would say you could do it cheaper by building it
yourself rather than buying Dell servers (cf. the MicroWulf project that was
discussed before: http://www.clustermonkey.net//content/view/211/33/).
Secondly: if what you need to get done is spreading activation (which
implies
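For what it's worth, a bare-bones sketch of spreading activation (hypothetical names, just to fix terms): each step, every node passes a decayed share of its activation along its outgoing weighted links.

    # Hypothetical sketch: one step moves decayed activation from each
    # node to its neighbours over weighted links.
    def spread(links, activation, decay=0.5, steps=3):
        # links: {node: [(neighbour, weight), ...]}
        for _ in range(steps):
            nxt = {}
            for node, level in activation.items():
                for neighbour, weight in links.get(node, []):
                    nxt[neighbour] = nxt.get(neighbour, 0.0) + decay * weight * level
            activation = nxt
        return activation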