Mark Waser writes:
 
> BTW, with this definition of morality, I would argue that it is a very rare 
> human that makes moral decisions any appreciable percent of the time 
 
Just a gentle suggestion: if you're planning to unveil a major AGI initiative 
next month, focus on that at the moment.  The stuff you have been arguing 
lately is quite peripheral to what you have in mind, except perhaps for the 
business model, and in that area I see little compromise beyond subtle 
technical points.
 
As I have begun to re-attach myself to the issues of "AGI", I have become 
suspicious of the ability, or the wisdom, of attaching important semantics to 
atomic tokens (as I suspect you are going to attempt to do, along with most 
approaches), but I'd dearly like to contribute to something I thought had a 
chance.
This stuff, though, belongs on comp.ai.philosophy (which is to say, it belongs 
unread).

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e