My two cents, FWIW: anyone who seriously doubts whether AGI is possible will 
never contribute anything of value to those who wish to build one. Anyone 
wishing to build an AGI should stop wasting time reading such literature, 
including postings here (let alone replying to them). This is not advocating 
blind or unscientific dogma; sometimes you simply have to make a choice of 
belief system, and no one ever achieved anything of greatness, or even of 
significance, by listening to those who say it can't be done. Reading the 
various philosophical arguments against AI was a useful step in my AGI 
education, but I went through that phase using books and internet articles.

Several times I have been on the verge of unsubscribing from the list because 
of those discussions (and all of the ego-maniacal mudslinging, flame wars and 
troll postings), so I agree fully with Harry. I want to see new ideas, 
experiences of what worked and what didn't, who's working on which approaches, 
suggestions for ways forward, references to new resources or tools, etc. So 
when, for example, Ben criticises Richard Loosemore's model, I'm highly 
interested (because Richard's way of thinking is in some respects much closer 
to mine than Ben's approach is); when Richard replies emotionally, I just skip 
his reply, but when he puts forward a rational argument it is extremely 
interesting to me.

So I vote that we stop all philosophical arguments about the possibility of 
AGI on this list, even though such arguments are a necessary, or rather 
crucial, stage in any AGIer's development... Incidentally, filing any AI 
reading in my "AI philosophy" folder is typically equivalent to utter 
condemnation, despite the fact that philosophy is one of my greatest 
interests.
Note that you should discount my posting somewhat, since I haven't posted 
anything for quite a while; that's because I am focusing what little time I 
have on building a first-generation prototype.
 
= Jean-Paul
>>> On 2008/10/15 at 18:12, Harry Chesley <[EMAIL PROTECTED]> wrote:
On 10/15/2008 8:01 AM, Ben Goertzel wrote:
>  What are your thoughts on this?

A narrower focus of the list would be better for me personally.

I've been convinced for a long time that computer-based AGI is possible, 
and am working toward it. As such, I'm no longer interested in arguments 
about whether it is feasible or not. I skip over those postings in the list.

I also skip over postings which are about a pet theory rather than a 
true reply to the original post. They tend to have the form "your idea x 
will not work because it is in opposition to my theory y, which states 
<insert complex description here>." Certainly one's own ideas and 
theories should contribute to a reply, but they should not /be/ the reply.

And the last category that I skip are discussions that have gone far 
into an area that I don't consider relevant to my own line of inquiry. 
But I think those are valuable contributions to the list, just not of 
immediate interest to me. Like a typical programmer, I tend to 
over-focus on what I'm working on. But what I find irrelevant may be 
spot on for someone else, or for me at some other time.



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now 
RSS Feed: https://www.listbox.com/member/archive/rss/303/ 
Modify Your Subscription: https://www.listbox.com/member/?&; 
Powered by Listbox: http://www.listbox.com


 


