I agree completely. For example, some months ago I set out my reasons for
thinking it won't be productive to try to copy the architecture of the
human brain short of full uploading. I haven't repeated my view on that
since then, because I don't have any new arguments or evidence bearing on
the matter, and repeating the same arguments would just annoy people. I
think that's a good criterion: is this a new argument, or just a repeat of
an old one? If the latter, it probably doesn't need to be repeated to the
same audience.

On Sun, Jul 8, 2012 at 4:15 PM, Ben Goertzel <[email protected]> wrote:

>
> In general, I think it would be good if subgroups of people sharing
> certain AI intuitions could carry out a discussion on this list, with
> others listening in and contributing occasionally, but with others NOT
> repetitively chiming into the discussion with comments to the effect of
> "By the way, I told you guys 100 times before that your paradigm sucks, so
> why do you keep on pursuing it?!"
>
> For example, I would be happy to listen in on others' discussions on
> analog computing approaches to AGI, making technical comments or asking
> technical questions occasionally; and I would not feel the need to
> interrupt these discussions repeatedly with comments of the form "Why don't
> you guys adopt my preferred AGI paradigm instead!!"
>
> This is almost making me feel motivated to create a set of posting
> guidelines for the list ;p .. but, not quite...
>
> -- Ben G
>
> On Sun, Jul 8, 2012 at 10:51 PM, Russell Wallace <
> [email protected]> wrote:
>
>> On Sat, Jul 7, 2012 at 12:11 AM, Steve Richfield <
>> [email protected]> wrote:
>>
>>> OK, perhaps we should just stay here and distinguish "weak AGI", where
>>> people attempt to somehow leverage data-point computation into an
>>> intelligent process, as now seems to be the norm on this forum, from
>>> "strong AGI", where we attempt to move up to whatever metalevel is at
>>> least as high as the one our brains operate on, and which could also
>>> conceivably be performed by plausibly manufacturable hardware, albeit
>>> nothing like present CPUs.
>>>
>>> Any problem with those terms?
>>>
>>
>> Yes, 'strong AI' already has an established meaning: the aim of producing
>> a fully human-level mind (by whatever method), as opposed to 'weak AI',
>> which merely aims to make computers smarter and more useful than they
>> currently are.
>>
>> Besides, you don't exactly need a PhD in psychology to figure out that
>> many people will object to the word 'weak' being applied to their line of
>> research! Personally I don't care about that so much as about the fact that
>> your proposed usage is highly uninformative.
>>
>> Until you get enough like-minded people to start a separate mailing list,
>> I would recommend coming up with a more descriptive term for your proposed
>> line of research.
>>
>
>
>
> --
> Ben Goertzel, PhD
> http://goertzel.org
>
> "My humanity is a constant self-overcoming" -- Friedrich Nietzsche
>
>



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com