This is basically a suggestion to move to a forum-type format instead of a
mailing list....  It has its pluses and minuses... you've cited one of the
pluses.

ben

On Tue, Oct 21, 2008 at 2:46 PM, Steve Richfield
<[EMAIL PROTECTED]> wrote:

> Ben,
>
> Hey, maybe I FINALLY got your "frame of mind" here. Just to test this,
> consider:
>
> Suppose we change the format NOT to exclude anything at all, but rather
> I/you/we set up a Wiki that includes EVERYTHING. Right next to a technical
> detail may be a link to a philosophical point, and right next to a
> philosophical point may be a link to a technical detail. Then, on this
> forum, people would only post pointers to new edits, and information that
> they EXPECT would disappear into the bit bucket by tomorrow.
>
> We would include identified "buzz phrases" to be able to pull important but
> disjoint things together, as I have been using the buzz phrase "Ben's list"
> with my various distilled "philosophical" (read that "feasibility") points.
>
> This way, everything ever related to a given subject would be pulled
> together and organized. I would be happier because the feasibility issues
> would all be together for anyone entering AGI to consider, and you would be
> happier because your technical section would be undisturbed by
> "philosophical" discussion, except for a few hyperlinks sprinkled therein.
>
> Does this work for everyone?
>
> Steve Richfield
> =================
> On 10/20/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
>>
>> Just to clarify one point: I am not opposed to philosophy, nor do I
>> consider it irrelevant to AGI.  I wrote a book on my own philosophy of mind
>> in 2006.
>>
>> I just feel like the philosophical discussions tend to overwhelm the
>> pragmatic discussions on this list, and that a greater number of pragmatic
>> discussions **might** emerge if the pragmatic and philosophical discussions
>> were carried out in separate venues.
>>
>> Some of us feel we already have adequate philosophical understanding to
>> design and engineer AGI systems.  We may be wrong, but that doesn't mean we
>> should spend our time debating our philosophical understandings, to the
>> exclusion of discussing the details of our concrete AGI work.
>>
>> For me, after enough discussion of the same philosophical issue, I stop
>> learning anything.  Most of the philosophical discussions on this list are
>> nearly identical in content to discussions I had with others 20 years ago.
>> I learned a lot from the discussions then, and learn a lot less from the
>> repeats...
>>
>> -- Ben
>>
>>
>> On Mon, Oct 20, 2008 at 9:06 AM, Mike Tintner <[EMAIL PROTECTED]> wrote:
>>
>>> Vlad: Good philosophy is necessary for AI... We need to work more on the
>>> foundations, to understand whether we are going in the right direction
>>>
>>>
>>> More or less perfectly said. While I can see that a majority of people
>>> here don't want it, philosophy (which should be scientifically based) is
>>> essential for AGI, precisely as Vlad says - to decide the proper
>>> directions and targets for AGI. What is creativity? Intelligence? What
>>> are the kinds of problems an AGI should be dealing with? What kind(s) of
>>> knowledge representation are necessary? Is language necessary? What forms
>>> should concepts take? What kinds of information structures, e.g.
>>> networks, should underlie them? What kind(s) of search are necessary? How
>>> do analogy and metaphor work? Is embodiment necessary? Etc. These are all
>>> matters for philosophical as well as scientific and
>>> technological/engineering discussion. In practice they tend to be more
>>> philosophical, because these areas are so vast that they can't - at
>>> present, at least - be neatly covered by any scientific,
>>> experimentally-backed theory.
>>>
>>> If your philosophy is all wrong, then the chances are very high that your
>>> engineering work will be a complete waste of time. So it's worth
>>> considering whether your personal AGI philosophy and direction are viable.
>>>
>>> And that is essentially what the philosophical discussions here have all
>>> been about - the proper *direction* for AGI efforts to take. Ben has
>>> mischaracterised these discussions. No one - certainly not me - is
>>> objecting to the *feasibility* of AGI. Everyone agrees that AGI in one
>>> form or another is indeed feasible, though some (increasingly including
>>> Ben himself, though by no means fully) incline toward robotic AGI. The
>>> arguments are mainly about direction, not feasibility.
>>>
>>> (There is a separate, philosophical discussion about feasibility in a
>>> different sense - the lack of a culture of feasibility, which is perhaps,
>>> subconsciously, what Ben was also referring to. No one, but no one, in
>>> AGI, including Ben, seems willing to expose their AGI ideas and proposals
>>> to any kind of feasibility discussion at all - i.e. how can this or that
>>> method solve any of the problems of general intelligence? This is what
>>> Steve R has pointed to recently, albeit IMO in a rather confusing way.)
>>>
>>> So while I recognize that a lot of people have an antipathy to my
>>> personal philosophising, one way or another you can't really avoid
>>> philosophising, unless you are, say, totally committed to just one
>>> approach, like Opencog. And even then...
>>>
>>> P.S. Philosophy is always a matter of (conflicting) opinion. (Especially,
>>> given last night's exchange, philosophy of science itself).
>>>
>>>
>>> -------------------------------------------
>>> agi
>>> Archives: https://www.listbox.com/member/archive/303/=now
>>> RSS Feed: https://www.listbox.com/member/archive/rss/303/
>>> Modify Your Subscription: https://www.listbox.com/member/?&;
>>> Powered by Listbox: http://www.listbox.com
>>>
>>
>>
>>
>> --
>> Ben Goertzel, PhD
>> CEO, Novamente LLC and Biomind LLC
>> Director of Research, SIAI
>> [EMAIL PROTECTED]
>>
>> "Nothing will ever be attempted if all possible objections must be first
>> overcome" - Dr Samuel Johnson
>>
>>
>>
>>
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome" - Dr Samuel Johnson



