There can be many reasons why a blog participant may need an argument repeated. 
Maybe he wasn't there, or wasn't paying attention when the argument was posted. 
Or he doesn't believe the argument, or simply prefers to ignore it because it 
is in the way of his own argument. That's why it may be useful to have a 
permanent repository where sufficiently solid arguments are kept for reference, 
something like a forum. I think Ben did something in this direction two or 
three months ago. 

 

But a forum can easily be misused. One alternative would be a repository where 
each participant is allowed to post just one article on his or her ideas, and 
then serves as curator for that article as needed. The blog would continue to 
serve as a tool for open discussion. 

 

Sergio

 

 

 

From: Russell Wallace [mailto:[email protected]] 
Sent: Sunday, July 08, 2012 10:26 AM
To: AGI
Subject: Re: [agi] re Computing functions versus solving equations (calculating 
versus physical execution)

 

I agree completely. For example, some months ago I set out my reasons for 
thinking it won't be productive to try to copy the architecture of the human 
brain short of full uploading. I haven't repeated my view on that since then, 
because I don't have any new arguments or evidence bearing on the matter, and 
repeating the same arguments would just annoy people. I think that's a good 
criterion: is this a new argument, or just a repeat of an old one? If the 
latter, it probably doesn't need to be repeated to the same audience.

On Sun, Jul 8, 2012 at 4:15 PM, Ben Goertzel <[email protected]> wrote:


In general, I think it would be good if subgroups of people sharing certain AI 
intuitions could carry out a discussion on this list, with others listening in 
and contributing occasionally, but with others NOT repetitively chiming into 
the discussion with comments to the effect of "By the way, I told you guys 
100 times before that your paradigm sucks, so why do you keep on pursuing it?!"

For example, I would be happy to listen in on others' discussions on analog 
computing approaches to AGI, making technical comments or asking technical 
questions occasionally; and I would not feel the need to interrupt these 
discussions repeatedly with comments of the form "Why don't you guys adopt my 
preferred AGI paradigm instead!!"

This is almost making me feel motivated to create a set of posting guidelines 
for the list ;p .. but, not quite...

-- Ben G

On Sun, Jul 8, 2012 at 10:51 PM, Russell Wallace <[email protected]> 
wrote:

On Sat, Jul 7, 2012 at 12:11 AM, Steve Richfield <[email protected]> 
wrote:

OK, perhaps we should just stay here and distinguish "weak AGI", where people 
attempt to somehow leverage data-point computation into an intelligent process, 
as now seems to be the norm on this forum, from "strong AGI", where we attempt 
to move up to whatever metalevel is at least as high as the one our brains 
operate on, and which could also conceivably be performed by plausibly 
manufacturable hardware, albeit nothing like present CPUs.

Any problem with those terms?

 

Yes, 'strong AI' already has an established meaning, denoting the aim of 
producing a fully human-level mind (by whatever method), as opposed to 'weak 
AI', which merely aims to make computers smarter and more useful than they 
currently are.

 

Besides, you don't exactly need a PhD in psychology to figure out that many 
people will object to the word 'weak' being applied to their line of research! 
Personally I don't care about that so much as about the fact that your proposed 
usage is highly uninformative.

 

Until you get enough like-minded people to start a separate mailing list, I 
would recommend coming up with a more descriptive term for your proposed line 
of research.


AGI | Archives <https://www.listbox.com/member/archive/303/=now> | Modify Your 
Subscription <https://www.listbox.com/member/archive/rss/303/212726-11ac2389>

-- 
Ben Goertzel, PhD
http://goertzel.org

"My humanity is a constant self-overcoming" -- Friedrich Nietzsche







