Samantha,

On 10/19/08, Samantha Atkins <[EMAIL PROTECTED]> wrote:
>
> This sounds good to me.  I am much more drawn to topic #1.  Topic #2 I have
> seen discussed recursively and in dozens of variants in multiple places.  The
> only thing I will add to Topic #2 is that I very seriously doubt current
> human intelligence individually or collectively is sufficient to address or
> meaningfully resolve or even crisply articulate such questions.


We are in absolute agreement that revolution, rather than evolution, is
necessary to advance. Aside from the specific technique, methods like Reverse
Reductio ad Absurdum show that intractable disputes absolutely MUST rest on a
commonly held false assumption. This means, for example, that if you take
EITHER side in the abortion debate, then you absolutely MUST hold a false
assumption. The only hope is broad societal education that flies in the face
of nearly every religion, which will never happen.

Without that impossible education, a truly "successful" AGI would have ~half
of the world's population bent on its immediate destruction, and not more
than 100 people would even understand what it said. Note that if you take
either side in the abortion debate, you will NOT be one of those 100
people. Who could you find to even maintain such a machine, and who would
ever follow such a machine?

> Much more is accomplished by actually "looking into the horse's mouth" than
> philosophizing endlessly.


Here, you think that AGI efforts will point the way to freeing man from his
collective madness. Given the constraints explained above, I just don't see
how this is possible.

Another entry for Ben's List:

*Impossible Expectations: Man has many issues and problems for which he has
no good answers. Given man's inductive abilities, this comes NOT because of
any inability to imagine the correct answers, but instead because either no
such answers exist, or because man rejects the correct answers when they are
placed before him. Obviously, AGI cannot help in either of these situations.*


Steve Richfield
===============

Ben Goertzel wrote:
>
>>
>> Hi all,
>>
>> I have been thinking a bit about the nature of conversations on this list.
>>
>> It seems to me there are two types of conversations here:
>>
>> 1)
>> Discussions of how to design or engineer AGI systems, using current
>> computers, according to designs that can feasibly be implemented by
>> moderately-sized groups of people
>>
>> 2)
>> Discussions about whether the above is even possible -- or whether it is
>> impossible because of weird physics, or poorly-defined special
>> characteristics of human creativity, or the so-called "complex systems
>> problem", or because AGI intrinsically requires billions of people and
>> quadrillions of dollars, or whatever
>>
>> Personally I am pretty bored with all the conversations of type 2.
>>
>> It's not that I consider them useless discussions in a grand sense ...
>> certainly, they are valid topics for intellectual inquiry.
>> But, to do anything real, you have to make **some** decisions about what
>> approach to take, and I've decided long ago to take an approach of trying to
>> engineer an AGI system.
>>
>> Now, if someone had a solid argument as to why engineering an AGI system
>> is impossible, that would be important.  But that never seems to be the
>> case.  Rather, what we hear are long discussions of peoples' intuitions and
>> opinions in this regard.  People are welcome to their own intuitions and
>> opinions, but I get really bored scanning through all these intuitions about
>> why AGI is impossible.
>>
>> One possibility would be to more narrowly focus this list, specifically on
>> **how to make AGI work**.
>>
>> If this re-focusing were done, then philosophical arguments about the
>> impossibility of engineering AGI in the near term would be judged **off
>> topic** by definition of the list purpose.
>>
>> Potentially, there could be another list, something like "agi-philosophy",
>> devoted to philosophical and weird-physics and other discussions about
>> whether AGI is possible or not.  I am not sure whether I feel like running
>> that other list ... and even if I ran it, I might not bother to read it very
>> often.  I'm interested in new, substantial ideas related to the in-principle
>> possibility of AGI, but not interested at all in endless philosophical
>> arguments over various peoples' intuitions in this regard.
>>
>> One fear I have is that people who are actually interested in building
>> AGI, could be scared away from this list because of the large volume of
>> anti-AGI philosophical discussion.   Which, I add, almost never has any new
>> content, and mainly just repeats well-known anti-AGI arguments (Penrose-like
>> physics arguments ... "mind is too complex to engineer, it has to be
>> evolved" ... "no one has built an AGI yet therefore it will never be done"
>> ... etc.)
>>
>> What are your thoughts on this?
>>
>> -- Ben
>>
>>
>>
>>
>> On Wed, Oct 15, 2008 at 10:49 AM, Jim Bromer <[EMAIL PROTECTED]> wrote:
>>
>>    On Wed, Oct 15, 2008 at 10:14 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>>    >
>>    > Actually, I think COMP=false is a perfectly valid subject for
>>    discussion on
>>    > this list.
>>    >
>>    > However, I don't think discussions of the form "I have all the
>>    answers, but
>>    > they're top-secret and I'm not telling you, hahaha" are
>>    particularly useful.
>>    >
>>    > So, speaking as a list participant, it seems to me this thread
>>    has probably
>>    > met its natural end, with this reference to proprietary
>>    weird-physics IP.
>>    >
>>    > However, speaking as list moderator, I don't find this thread so
>>    off-topic
>>    > or unpleasant as to formally kill the thread.
>>    >
>>    > -- Ben
>>
>>    If someone doesn't want to get into a conversation with Colin about
>>    whatever it is that he is saying, then they should just exercise some
>>    self-control and refrain from doing so.
>>
>>    I think Colin's ideas are pretty far out there. But that does not mean
>>    that he has never said anything that might be useful.
>>
>>    My offbeat topic, namely my belief that the Lord may have given me
>>    some direction about a novel approach to logical satisfiability that
>>    I am working on (though I don't want to discuss the details of the
>>    algorithms until I have had a chance to see whether they work), was
>>    never intended to be a discussion about the theory itself.  I wanted
>>    to have a discussion about whether or not a good SAT solution would
>>    have a significant influence on AGI, and whether or not the unlikely
>>    discovery of an unexpected breakthrough on SAT would serve as
>>    rational evidence that the Lord helped me with the theory.
>>
>>    Although I am skeptical about what I think Colin is claiming, there is
>>    an obvious parallel between his case and mine.  There are relevant
>>    issues which he wants to discuss even though his central claim seems
>>    to be private, and these relevant issues may be interesting.
>>
>>    Colin's unusual reference to some solid path which cannot yet be
>>    discussed is annoying partly because it is so obviously unfounded.  If
>>    he had the proof (or a method), then why isn't he writing it up (or
>>    working it out)?  A similar argument was made against me by the way,
>>    but the difference was that I never said that I had the proof or
>>    method.  (I did say that you should get used to a polynomial time
>>    solution to SAT but I never said that I had a working algorithm.)
>>
>>    My point is that even though people may annoy you with what seem like
>>    unsubstantiated claims, that does not disqualify everything they have
>>    said.  That rule could so easily be applied to anyone who posts on
>>    this list.
>>
>>    Jim Bromer
>>
>>
>>    -------------------------------------------
>>    agi
>>    Archives: https://www.listbox.com/member/archive/303/=now
>>    RSS Feed: https://www.listbox.com/member/archive/rss/303/
>>    Modify Your Subscription: https://www.listbox.com/member/?&;
>>    Powered by Listbox: http://www.listbox.com
>>
>>
>>
>>
>> --
>> Ben Goertzel, PhD
>> CEO, Novamente LLC and Biomind LLC
>> Director of Research, SIAI
>> [EMAIL PROTECTED]
>>
>> "Nothing will ever be attempted if all possible objections must be first
>> overcome "  - Dr Samuel Johnson
>>
>>
>>
>>
>
>



