Russell,

Your post is an excellent foil to respond to...
On Sun, Jul 8, 2012 at 8:25 AM, Russell Wallace <[email protected]> wrote:

> I agree completely. For example, some months ago I set out my reasons for
> thinking it won't be productive to try to copy the architecture of the
> human brain short of full uploading.

I missed your reasons, and am also only slightly interested in uploading. There is a vast chasm between "ignoring" and "copying". Perhaps the most important point in between these two is "understanding", which subdivides into "generally understanding the principles" (which I don't see that we are anywhere close to yet) and "having access to any desired details" (which may be easier or harder than general understanding, depending on both the physical capabilities of future equipment and the mental capabilities of future researchers).

> I haven't repeated my view on that since then,

Perhaps you could "recast" your argument to address doing at least enough human-brain research, à la reverse engineering, to extract the general principles, while developing the capability to "drill down" into any details for which a general understanding proves inadequate?

> because I don't have any new arguments or evidence bearing on the matter,
> and repeating the same arguments would just annoy people.

It seems pretty clear to me that it would be IMPOSSIBLE ever to make a "copy" (with all the unavoidable problems that likely copying equipment would have) without an understanding to guide the debugging process. Since the AGI people here want only the understanding, and copying (and therefore uploading) also requires an understanding, getting an understanding would seem to me to be the obvious next step, ESPECIALLY if (as I suspect) a modest amount of developmental work could produce tools to greatly expedite getting that understanding.

> I think that's a good criterion: is this a new argument, or just a repeat
> of an old one? If the latter, it probably doesn't need to be repeated to
> the same audience.
To avoid lost arguments (apparently like Russell's above), I think it is still OK to reference past arguments, especially if a hyperlink is provided for a quick re-read. I often click on the hyperlinks that people provide.

Steve

===================

> On Sun, Jul 8, 2012 at 4:15 PM, Ben Goertzel <[email protected]> wrote:
>
>> In general, I think it would be good if subgroups of people sharing
>> certain AI intuitions could carry out a discussion on this list, with
>> others listening in and contributing occasionally, but with others NOT
>> repetitively chiming into the discussion with comments of the basic meaning
>> "By the way, I told you guys 100 times before that your paradigm sucks, so
>> why do you keep on pursuing it?!"
>>
>> For example, I would be happy to listen in on others' discussions of
>> analog-computing approaches to AGI, making technical comments or asking
>> technical questions occasionally; and I would not feel the need to
>> interrupt these discussions repeatedly with comments of the form "Why don't
>> you guys adopt my preferred AGI paradigm instead!!"
>>
>> This is almost making me feel motivated to create a set of posting
>> guidelines for the list ;p .. but, not quite...
>>
>> -- Ben G
>>
>> On Sun, Jul 8, 2012 at 10:51 PM, Russell Wallace <[email protected]> wrote:
>>
>>> On Sat, Jul 7, 2012 at 12:11 AM, Steve Richfield <[email protected]> wrote:
>>>
>>>> OK, perhaps we should just stay here and distinguish "weak AGI", where
>>>> people attempt to somehow leverage data-point computation into an
>>>> intelligent process, as now seems to be the norm on this forum, from
>>>> "strong AGI", where we attempt to move up to whatever metalevel is at
>>>> least as high as the one our brains operate on, and which can also
>>>> conceivably be performed by plausibly manufacturable hardware, albeit
>>>> nothing like present CPUs.
>>>>
>>>> Any problem with those terms?
>>>
>>> Yes, "strong AI" already has an established meaning, denoting the aim of
>>> producing a fully human-level mind (by whatever method), as opposed to
>>> "weak AI", which merely aims to make computers smarter and more useful
>>> than they currently are.
>>>
>>> Besides, you don't exactly need a PhD in psychology to figure out that
>>> many people will object to the word "weak" being applied to their line of
>>> research! Personally, I don't care about that so much as about the fact
>>> that your proposed usage is highly uninformative.
>>>
>>> Until you get enough like-minded people to start a separate mailing
>>> list, I would recommend coming up with a more descriptive term for your
>>> proposed line of research.
>>
>> --
>> Ben Goertzel, PhD
>> http://goertzel.org
>>
>> "My humanity is a constant self-overcoming" -- Friedrich Nietzsche

--
Full employment can be had with the stroke of a pen. Simply institute a six-hour workday. That will easily create enough new jobs to bring back full employment.
