Steve Richfield said:
>May I suggest that you ask people to put something like [agi feasibility] in
>their subject lines and allow things to otherwise continue as they are. Then,
>when you fail, it won't poison other AGI efforts.

This is a strange and quite profoundly disheartening statement. What
would compel you to use such a tone?

For my part, I'd like to see fewer trolls fed and more bugs squished.

On 10/15/08, Eric Burton <[EMAIL PROTECTED]> wrote:
> I also agree with Vladimir: the mailing list format is more convenient and
> more fun.
>
> On 10/15/08, Eric Burton <[EMAIL PROTECTED]> wrote:
>> I also agree the list should focus on specific approaches and not on
>> highfalutin denials of achievability. I don't know why non-human,
>> specifically electronic, intelligence is such a hot-button issue for
>> some folks. It's like they'd be happier if it never happened. But why?
>>
>> On 10/15/08, Terren Suydam <[EMAIL PROTECTED]> wrote:
>>>
>>> This is a publicly accessible forum with searchable archives... you don't
>>> necessarily have to be subscribed and inundated to find those nuggets. I
>>> don't know any funding decision makers myself, but if I were in control of
>>> a budget I'd be using every resource at my disposal to clarify my decision.
>>> If I were considering Novamente, for example, I'd be looking for exactly
>>> the kind of exchanges you and Richard Loosemore (for example) have had on
>>> the list, to gain a better understanding of possible criticism, and because
>>> others may be able to articulate such criticism far better than me.
>>> Obviously the same goes for anyone else on the list who would look for
>>> funding... I'd want to see you defend your ideas, especially in the absence
>>> of peer-reviewed journals (something the JAGI hopes to remedy, obviously).
>>>
>>> Terren
>>>
>>> --- On Wed, 10/15/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>>> From: Ben Goertzel <[EMAIL PROTECTED]>
>>> Subject: Re: [agi] META: A possible re-focusing of this list
>>> To: agi@v2.listbox.com
>>> Date: Wednesday, October 15, 2008, 3:37 PM
>>>
>>>
>>> Terren,
>>>
>>> I know a good number of VCs and government and private funding decision
>>> makers... and believe me, **none** of them has remotely enough extra time
>>> to wade through the amount of text that flows on this list, to find the
>>> nuggets of real intellectual interest!!!
>>>
>>>
>>> -- Ben G
>>>
>>> On Wed, Oct 15, 2008 at 12:07 PM, Terren Suydam <[EMAIL PROTECTED]>
>>> wrote:
>>>
>>>
>>>
>>> One other important point... if I were a potential venture capitalist or
>>> some other sort of funding decision-maker, I would be on this list and
>>> watching the debate. I'd be looking for intelligent defense of (hopefully)
>>> intelligent criticism to increase my confidence about the decision to fund.
>>> This kind of forum also allows you to sort of advertise your approach to
>>> those who are new to the game, particularly young folks who might one day
>>> be valuable contributors, although I suppose that's possible in the more
>>> tightly-focused forum as well.
>>>
>>>
>>> --- On Wed, 10/15/08, Terren Suydam <[EMAIL PROTECTED]> wrote:
>>>
>>> From: Terren Suydam <[EMAIL PROTECTED]>
>>> Subject: Re: [agi] META: A possible re-focusing of this list
>>> To: agi@v2.listbox.com
>>> Date: Wednesday, October 15, 2008, 11:29 AM
>>>
>>>
>>>
>>> Hi Ben,
>>>
>>>
>>> I think that the current focus has its pros and cons, and the narrower
>>> focus you suggest would have *its* pros and cons. As you said, the con of
>>> the current focus is the boring repetition of various anti-AGI positions.
>>> But the pro of allowing that stuff is for those of us who use the conflict
>>> among competing viewpoints to clarify our own positions and gain insight.
>>> Since you seem to be fairly clear about your own viewpoint, it is for you a
>>> situation of diminishing returns (although I will point out that a recent
>>> blog post of yours on the subject of play was inspired, I think, by a point
>>> made by Mike Tintner, who is probably the most obvious target of your
>>> frustration).
>>>
>>> For myself, I have found tremendous value here in the debate (which
>>> probably says a lot about the crudeness of my philosophy). I have had many
>>> new insights and discovered some false assumptions. If you narrowed the
>>> focus, I would probably leave (I am not offering that as a reason not to do
>>> it! :-)  I would be disappointed, but I would understand if that's the
>>> decision you made.
>>>
>>>
>>> Finally, although there hasn't been much novelty in the debate (from your
>>> perspective, anyway), there is always the possibility that there will be.
>>> This seems to be the only public forum for AGI discussion out there (are
>>> there others, anyone?), so presumably there's a good chance that novelty
>>> would show up here, and that is good for you and others actively involved
>>> in AGI research.
>>>
>>>
>>> Best,
>>> Terren
>>>
>>>
>>> --- On Wed, 10/15/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>>>
>>> From: Ben Goertzel <[EMAIL PROTECTED]>
>>> Subject: [agi] META: A possible re-focusing of this list
>>> To: agi@v2.listbox.com
>>>
>>> Date: Wednesday, October 15, 2008, 11:01 AM
>>>
>>>
>>> Hi all,
>>>
>>> I have been thinking a bit about the nature of conversations on this list.
>>>
>>> It seems to me there are two types of conversations here:
>>>
>>> 1) Discussions of how to design or engineer AGI systems, using current
>>> computers, according to designs that can feasibly be implemented by
>>> moderately-sized groups of people
>>>
>>> 2) Discussions about whether the above is even possible -- or whether it is
>>> impossible because of weird physics, or poorly-defined special
>>> characteristics of human creativity, or the so-called "complex systems
>>> problem", or because AGI intrinsically requires billions of people and
>>> quadrillions of dollars, or whatever
>>>
>>>
>>>
>>> Personally, I am pretty bored with all the conversations of type 2.
>>>
>>> It's not that I consider them useless discussions in a grand sense ...
>>> certainly, they are valid topics for intellectual inquiry.
>>>
>>>
>>>
>>> But, to do anything real, you have to make **some** decisions about what
>>> approach to take, and I decided long ago to take an approach of trying to
>>> engineer an AGI system.
>>>
>>> Now, if someone had a solid argument as to why engineering an AGI system is
>>> impossible, that would be important.  But that never seems to be the case.
>>> Rather, what we hear are long discussions of people's intuitions and
>>> opinions in this regard.  People are welcome to their own intuitions and
>>> opinions, but I get really bored scanning through all these intuitions
>>> about why AGI is impossible.
>>>
>>>
>>>
>>> One possibility would be to focus this list more narrowly, specifically on
>>> **how to make AGI work**.
>>>
>>> If this re-focusing were done, then philosophical arguments about the
>>> impossibility of engineering AGI in the near term would be judged **off
>>> topic** by definition of the list purpose.
>>>
>>>
>>>
>>> Potentially, there could be another list, something like "agi-philosophy",
>>> devoted to philosophical and weird-physics and other discussions about
>>> whether AGI is possible or not.  I am not sure whether I feel like running
>>> that other list ... and even if I ran it, I might not bother to read it
>>> very often.  I'm interested in new, substantial ideas related to the
>>> in-principle possibility of AGI, but not interested at all in endless
>>> philosophical arguments over various people's intuitions in this regard.
>>>
>>>
>>>
>>> One fear I have is that people who are actually interested in building AGI
>>> could be scared away from this list because of the large volume of anti-AGI
>>> philosophical discussion -- which, I add, almost never has any new content,
>>> and mainly just repeats well-known anti-AGI arguments (Penrose-like physics
>>> arguments ... "mind is too complex to engineer, it has to be evolved" ...
>>> "no one has built an AGI yet, therefore it will never be done" ... etc.)
>>>
>>>
>>>
>>> What are your thoughts on this?
>>>
>>> -- Ben
>>>
>>>
>>>
>>>
>>> On Wed, Oct 15, 2008 at 10:49 AM, Jim Bromer <[EMAIL PROTECTED]> wrote:
>>>
>>>
>>> On Wed, Oct 15, 2008 at 10:14 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>>>
>>>> Actually, I think COMP=false is a perfectly valid subject for discussion
>>>> on this list.
>>>>
>>>> However, I don't think discussions of the form "I have all the answers,
>>>> but they're top-secret and I'm not telling you, hahaha" are particularly
>>>> useful.
>>>>
>>>> So, speaking as a list participant, it seems to me this thread has
>>>> probably met its natural end, with this reference to proprietary
>>>> weird-physics IP.
>>>>
>>>> However, speaking as list moderator, I don't find this thread so off-topic
>>>> or unpleasant as to formally kill the thread.
>>>>
>>>> -- Ben
>>>
>>> If someone doesn't want to get into a conversation with Colin about
>>> whatever it is that he is saying, then they should just exercise some
>>> self-control and refrain from doing so.
>>>
>>> I think Colin's ideas are pretty far out there. But that does not mean
>>> that he has never said anything that might be useful.
>>>
>>> My offbeat topic was never intended to be a discussion about the theory
>>> itself. (I believe that the Lord may have given me some direction about a
>>> novel approach to logical satisfiability that I am working on, but I don't
>>> want to discuss the details of the algorithms until I have had a chance to
>>> see whether they work.) I wanted to have a discussion about whether or not
>>> a good SAT solution would have a significant influence on AGI, and whether
>>> or not the unlikely discovery of an unexpected breakthrough on SAT would
>>> serve as rational evidence in support of the claim that the Lord helped me
>>> with the theory.
>>>
>>> Although I am skeptical about what I think Colin is claiming, there is an
>>> obvious parallel between his case and mine.  There are relevant issues
>>> which he wants to discuss even though his central claim seems to be
>>> private, and these relevant issues may be interesting.
>>>
>>> Colin's unusual reference to some solid path which cannot yet be discussed
>>> is annoying partly because it is so obviously unfounded.  If he had the
>>> proof (or a method), then why isn't he writing it up (or working it out)?
>>> A similar argument was made against me, by the way, but the difference was
>>> that I never said that I had the proof or method.  (I did say that you
>>> should get used to a polynomial-time solution to SAT, but I never said that
>>> I had a working algorithm.)
>>>
>>> My point is that even though people may annoy you with what seem like
>>> unsubstantiated claims, that does not disqualify everything they have
>>> said. That rule could so easily be applied to anyone who posts on this
>>> list.
>>>
>>> Jim Bromer
>>>
>>> --
>>> Ben Goertzel, PhD
>>> CEO, Novamente LLC and Biomind LLC
>>> Director of Research, SIAI
>>> [EMAIL PROTECTED]
>>>
>>>
>>> "Nothing will ever be attempted if all possible objections must be first
>>> overcome "  - Dr Samuel Johnson
>>>
>>> --
>>> Ben Goertzel, PhD
>>> CEO, Novamente LLC and Biomind LLC
>>> Director of Research, SIAI
>>> [EMAIL PROTECTED]
>>>
>>> "Nothing will ever be attempted if all possible objections must be first
>>> overcome "  - Dr Samuel Johnson
>>>
>>
>


-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com
