Steve,

I don't know why you are taking this opportunity to attack my own particular
approach to AGI, because that is **not** what this thread is about.

I am talking about -- hypothetically, I'm not at all sure it's a good idea,
I'm just raising the issue for discussion!! -- separating two different
**categories** of discussion:

1)
Specifics of attempts to engineer human-level AGI on current computers

2)
General discussion of the philosophy of AGI, and the in-principle viability
of engineering AGI on current computers

My own research is not the only thing falling into Category 1.

And, as it happens, I have published a number of books and papers falling
into Category 2.

So, I'm not trying to force my ideas on anyone, nor am I suggesting we
constrain the discussion in line with my personal opinions -- I'm suggesting,
very tentatively, that it might make more sense to have some way of
separating the two broad **categories** of discussion defined above as 1 and
2.

I am personally interested in both 1 and 2, but I'm interested in devoting
more of my time and attention to 1 rather than 2 at this stage in my life.

I keep trying to be VERY clear on these points, yet people keep
misinterpreting me because they think I have some sort of hidden agenda.
That's not the case.

-- Ben G

On Wed, Oct 15, 2008 at 3:26 PM, Steve Richfield
<[EMAIL PROTECTED]> wrote:

> Ben, et al,
>
> Those who have been in the computer biz for more than just a few years know
> for a moral certainty that the difference between successful and failed
> projects very often lies in the feasibility study. Further, most of the
> largest computer debacles in history had early objectors on feasibility
> grounds, and these people were ignored.
>
> Rubbing my own crystal ball (momentary pause for polishing), I think I see
> the future of AGI, and it goes something like this: Like so many other
> grossly under-funded efforts, the present efforts here will either fail, or
> be superseded by someone else's highly funded effort that borrows heavily
> from your work. My BIG concern is whether a failure here will poison other
> future efforts for decades to come, much as perceptrons and shallow parsing
> were poisoned.
>
> I believe that the following path, which you are apparently on, will
> be COMPLETELY disastrous, not only to your own efforts, but very likely to
> the entire future of AGI:
> 1.  Fail to advance any substantial argument of feasibility.
> 2.  Refuse to directly address various challenges on feasibility grounds
> advanced by others.
> 3.  Completely cut off all feasibility discussion.
> 4.  Fail for any of the countless reasons that have been discussed here on
> this forum, not to mention personal limitations (time, money, health, etc).
>
> Note here that it is VERY important that, if you fail, the failure NOT
> be directly attributable to AGI, but rather to flaws in your particular
> approach. Hiding these flaws only dooms the future of AGI. The present
> format lays these bare and presents no such problems.
>
> If you do indeed cement this questionable path, AGI's only apparent
> long-term hope for success is that you fall into obscurity and are
> completely forgotten, not that I necessarily think that this will happen.
>
> Hopefully you can see that it is in no one's best interest to effectively
> present the world with a choice between you and AGI, which the decision you
> are now considering could do.
>
> Also, addressing Terren Suydam's comments, no potential investor would EVER
> give a dime to anyone who had cut off feasibility discussion. Such a decision
> will forever cut you off from future investment money, probably for
> everything that you will ever do, and hence doom your efforts to obscurity
> no matter how great your technical success might be.
>
> But, what the heck, these are all just feasibility arguments, and you want
> to cut these off.
>
> May I suggest that you ask people to put something like [agi feasibility]
> in their subject lines and allow things to otherwise continue as they are.
> Then, when you fail, it won't poison other AGI efforts. Perhaps Matt or
> someone would like to separately monitor those postings.
>
> Steve Richfield
> ===============
> On 10/15/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
>>
>> Hi all,
>>
>> I have been thinking a bit about the nature of conversations on this list.
>>
>> It seems to me there are two types of conversations here:
>>
>> 1)
>> Discussions of how to design or engineer AGI systems, using current
>> computers, according to designs that can feasibly be implemented by
>> moderately-sized groups of people
>>
>> 2)
>> Discussions about whether the above is even possible -- or whether it is
>> impossible because of weird physics, or poorly-defined special
>> characteristics of human creativity, or the so-called "complex systems
>> problem", or because AGI intrinsically requires billions of people and
>> quadrillions of dollars, or whatever
>>
>> Personally I am pretty bored with all the conversations of type 2.
>>
>> It's not that I consider them useless discussions in a grand sense ...
>> certainly, they are valid topics for intellectual inquiry.
>>
>> But, to do anything real, you have to make **some** decisions about what
>> approach to take, and I decided long ago to take the approach of trying to
>> engineer an AGI system.
>>
>> Now, if someone had a solid argument as to why engineering an AGI system
>> is impossible, that would be important.  But that never seems to be the
>> case.  Rather, what we hear are long discussions of peoples' intuitions and
>> opinions in this regard.  People are welcome to their own intuitions and
>> opinions, but I get really bored scanning through all these intuitions about
>> why AGI is impossible.
>>
>> One possibility would be to more narrowly focus this list, specifically on
>> **how to make AGI work**.
>>
>> If this re-focusing were done, then philosophical arguments about the
>> impossibility of engineering AGI in the near term would be judged **off
>> topic** by definition of the list purpose.
>>
>> Potentially, there could be another list, something like "agi-philosophy",
>> devoted to philosophical and weird-physics and other discussions about
>> whether AGI is possible or not.  I am not sure whether I feel like running
>> that other list ... and even if I ran it, I might not bother to read it very
>> often.  I'm interested in new, substantial ideas related to the in-principle
>> possibility of AGI, but not interested at all in endless philosophical
>> arguments over various peoples' intuitions in this regard.
>>
>> One fear I have is that people who are actually interested in building
>> AGI could be scared away from this list because of the large volume of
>> anti-AGI philosophical discussion, which, I add, almost never has any new
>> content and mainly just repeats well-known anti-AGI arguments (Penrose-like
>> physics arguments ... "mind is too complex to engineer, it has to be
>> evolved" ... "no one has built an AGI yet therefore it will never be done"
>> ... etc.)
>>
>> What are your thoughts on this?
>>
>> -- Ben
>>
>>
>>
>>
>> On Wed, Oct 15, 2008 at 10:49 AM, Jim Bromer <[EMAIL PROTECTED]> wrote:
>>
>>> On Wed, Oct 15, 2008 at 10:14 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>>> >
>>> > Actually, I think COMP=false is a perfectly valid subject for
>>> discussion on
>>> > this list.
>>> >
>>> > However, I don't think discussions of the form "I have all the answers,
>>> but
>>> > they're top-secret and I'm not telling you, hahaha" are particularly
>>> useful.
>>> >
>>> > So, speaking as a list participant, it seems to me this thread has
>>> probably
>>> > met its natural end, with this reference to proprietary weird-physics
>>> IP.
>>> >
>>> > However, speaking as list moderator, I don't find this thread so
>>> off-topic
>>> > or unpleasant as to formally kill the thread.
>>> >
>>> > -- Ben
>>>
>>>
>>> If someone doesn't want to get into a conversation with Colin about
>>> whatever it is that he is saying, then they should just exercise some
>>> self-control and refrain from doing so.
>>>
>>> I think Colin's ideas are pretty far out there. But that does not mean
>>> that he has never said anything that might be useful.
>>>
>>> My offbeat topic was never intended to be a discussion about the theory
>>> itself.  I believe that the Lord may have given me some direction about a
>>> novel approach to logical satisfiability that I am working on, but I don't
>>> want to discuss the details of the algorithms until I have had a chance to
>>> see whether they work.  What I wanted was a discussion about whether or
>>> not a good SAT solution would have a significant influence on AGI, and
>>> whether or not the unlikely discovery of an unexpected breakthrough on SAT
>>> would serve as rational evidence that the Lord helped me with the theory.
>>>
>>> Although I am skeptical about what I think Colin is claiming, there is
>>> an obvious parallel between his case and mine.  There are relevant
>>> issues which he wants to discuss even though his central claim seems
>>> to be private, and those issues may be interesting.
>>>
>>> Colin's unusual reference to some solid path which cannot yet be
>>> discussed is annoying partly because it is so obviously unfounded.  If he
>>> had the proof (or a method), then why isn't he writing it up (or
>>> working it out)?  A similar argument was made against me, by the way,
>>> but the difference was that I never said that I had the proof or
>>> method.  (I did say that you should get used to a polynomial-time
>>> solution to SAT, but I never said that I had a working algorithm.)
>>>
>>> My point is that even though people may annoy you with what seem like
>>> unsubstantiated claims, that does not disqualify everything they have
>>> said.  That rule could just as easily be applied to anyone who posts on
>>> this list.
>>>
>>> Jim Bromer
>>>
>>>
>>
>>
>>
>> --
>> Ben Goertzel, PhD
>> CEO, Novamente LLC and Biomind LLC
>> Director of Research, SIAI
>> [EMAIL PROTECTED]
>>
>> "Nothing will ever be attempted if all possible objections must be first
>> overcome." - Dr Samuel Johnson
>>
>>
>>
>>
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome." - Dr Samuel Johnson


