2008/7/3 Terren Suydam <[EMAIL PROTECTED]>:
>
> --- On Wed, 7/2/08, William Pearson <[EMAIL PROTECTED]> wrote:
>> Evolution! I'm not saying your way can't work, just saying why I
>> short cut where I do. Note a thing has a purpose if it is useful to
>> apply the design stance* to it. There are two things to
>> differentiate between: having a purpose and having some feedback of
>> a purpose built in to the system.
>
> I don't believe evolution has a purpose. See Hod Lipson's TED talk for an 
> intriguing experiment in which replication is an inevitable outcome for a 
> system of building blocks explicitly set up in a random fashion. In other 
> words, purpose is emergent and ultimately in the mind of the beholder.
>
> See this article for an interesting take that increasing complexity is a 
> property of our laws of thermodynamics for non-equilibrium systems:
>
> http://biology.plosjournals.org/perlserv/?request=get-document&doi=10.1371/journal.pbio.0050142&ct=1
>
> In other words, Darwinian evolution is a special case of a more basic kind of 
> selection based on the laws of physics. This would deprive evolution of any 
> notion of purpose.
>

Evolution doesn't have a purpose; it creates things with purpose,
where purpose means it is useful to apply the design stance to them,
e.g. to ask what a frog's eye is for.

>> It is the second I meant, I should have been more specific. That
>> is, to apply the intentional stance to something successfully, I
>> think a sense of its own purpose needs to be embedded in that
>> entity (this may only be a very crude approximation to the purpose
>> we might assign it from an evolution's-eye view).
>
> Specifying a system's goals is limiting in the sense that we don't force the 
> agent to construct its own goals based on it own constructions. In other 
> words, this is just a different way of creating an ontology. It narrows the 
> domain of applicability. That may be exactly what you want to do, but for AGI 
> researchers, it is a mistake.

Remember when I said that a purpose is not the same thing as a goal?
The purpose the system might be said to have embedded is attempting to
maximise a certain signal. This purpose presupposes no ontology. The
fact that this signal is attached to a human means the system as a
whole might form the goal of trying to please the human; or, depending
on what the human does, it might develop other goals. Goals are not
the same as purposes: goals require the intentional stance, purposes
only the design stance.
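To make the "embedded purpose with no ontology" point concrete, here is a toy sketch (my own illustration, not Will's actual design; every name in it is made up): an agent that only tries to maximise an opaque scalar signal, with no model of what its actions or the signal "mean". Any goal-like behaviour, such as pleasing the human who controls the signal, is something an observer taking the intentional stance reads into it.

```python
import random

def signal_maximiser(actions, get_signal, steps=1000, epsilon=0.1, seed=0):
    """Pick actions to maximise a scalar signal. The agent has no
    built-in ontology: actions and the signal source are opaque to it."""
    rng = random.Random(seed)
    totals = {a: 0.0 for a in actions}
    counts = {a: 0 for a in actions}
    for _ in range(steps):
        if rng.random() < epsilon:
            a = rng.choice(actions)  # occasionally try an arbitrary action
        else:
            # otherwise exploit the best average signal seen so far
            a = max(actions,
                    key=lambda x: totals[x] / counts[x] if counts[x] else 0.0)
        r = get_signal(a)  # the signal just arrives; its origin is opaque
        totals[a] += r
        counts[a] += 1
    return max(actions, key=lambda x: totals[x] / max(counts[x], 1))

# Stand-in for "the signal is attached to a human": this particular
# human happens to reward action "b" most strongly.
def human(action):
    return 1.0 if action == "b" else 0.2

best = signal_maximiser(["a", "b", "c"], human)
print(best)  # the agent settles on "b", as if its goal were to please the human
```

The agent never represents "human" or "pleasing"; those descriptions belong to the observer, which is the distinction between a built-in purpose and an ascribed goal.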

>> Also your way we will end up with entities that may not be
>> useful to
>> us, which I think of as a negative for a long costly
>> research program.
>>
>>  Will
>
> Usefulness, again, is in the eye of the beholder. What appears not useful 
> today may be absolutely critical to an evolved descendant. This is a popular 
> explanation for how diversity emerges in nature, that a virus or bacteria 
> does some kind of horizontal transfer of its genes into a host genome, and 
> that gene becomes the basis for a future adaptation.
>
> When William Burroughs said language is a virus, he may have been more 
> correct than he knew. :-]
>


Possibly, but it will be another huge research topic to actually talk
to the things that evolve in the artificial universe, as they will
share very little background knowledge or ontology with us. I wish you
luck and will be interested to see where you go, but the alife route
is just too slow and resource-intensive for my liking.

  Will


-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=106510220-47b225
Powered by Listbox: http://www.listbox.com