Mike,

But this is horrible! If what you are saying is true, then research
will barely progress.

On Mon, Aug 18, 2008 at 11:46 AM, Mike Tintner <[EMAIL PROTECTED]> wrote:
> Abram,
>
> The key distinction here is probably that some approach to AGI may be widely
> accepted as having great *promise*. That has certainly been the case,
> although I actually doubt it could happen again. There were also no
> robots of note in the past. Personally, I can't see any approach being
> accepted now - and the general responses of this forum, I think, support
> this - until it actually delivers on some form of GI.
>
> Mike,
>
> There are at least two ways this can happen, I think. The first way is
> that a mechanism is theoretically proven to be "complete", for some
> less-than-sufficient formalism. The best example of this is one I
> already mentioned: the neural nets of the nineties (specifically,
> feedforward neural nets with multiple hidden layers). There is a
> completeness result associated with these. I quote from
> http://www.learnartificialneuralnetworks.com/backpropagation.html :
>
> "Although backpropagation can be applied to networks with any number
> of layers, just as for networks with binary units it has been shown
> (Hornik, Stinchcombe, & White, 1989; Funahashi, 1989; Cybenko, 1989;
> Hartman, Keeler, & Kowalski, 1990) that only one layer of hidden units
> suffices to approximate any function with finitely many discontinuities
> to arbitrary precision, provided the activation functions of the
> hidden units are non-linear (the universal approximation theorem). In
> most applications a feed-forward network with a single layer of hidden
> units is used with a sigmoid activation function for the units."
>
> This sort of thing could have contributed to the 50 years of
> less-than-success you mentioned.
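(A minimal illustration of the completeness result quoted above, not from the original thread: a feedforward net with a single hidden layer of sigmoid units and a linear output, trained by plain backpropagation to approximate sin(x). The network size, learning rate, and training data are invented for this sketch.)

```python
# Sketch of the universal approximation setup: ONE hidden layer of
# sigmoid units, linear output unit, trained by backpropagation (SGD)
# to approximate sin(x) on [-pi, pi].
import math
import random

random.seed(0)

# training data: 50 evenly spaced points of sin(x)
X = [-math.pi + 2 * math.pi * i / 49 for i in range(50)]
Y = [math.sin(x) for x in X]

H = 10                                            # hidden units (arbitrary)
w1 = [random.uniform(-1, 1) for _ in range(H)]    # input -> hidden weights
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]    # hidden -> output weights
b2 = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

lr = 0.05
for epoch in range(1500):
    for x, t in zip(X, Y):
        # forward pass
        h = [sigmoid(w1[j] * x + b1[j]) for j in range(H)]
        out = sum(w2[j] * h[j] for j in range(H)) + b2
        e = out - t                               # gradient of 0.5*e^2 w.r.t. out
        # backward pass
        for j in range(H):
            gh = e * w2[j] * h[j] * (1.0 - h[j])  # uses pre-update w2[j]
            w2[j] -= lr * e * h[j]
            w1[j] -= lr * gh * x
            b1[j] -= lr * gh
        b2 -= lr * e

mse = sum((sum(w2[j] * sigmoid(w1[j] * x + b1[j]) for j in range(H)) + b2 - t) ** 2
          for x, t in zip(X, Y)) / len(X)
print("final MSE:", mse)
```

The theorem only speaks to expressive capacity: one hidden layer can fit the curve arbitrarily well, but nothing in the result says the network generalizes beyond the function it was trained on - which is exactly the gap between "complete" and "sufficient".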
>
> The second way this phenomenon could manifest is more a personal fear
> than anything else. I am worried that there really might be partial
> principles of mind that could seem to be able to do everything for a
> time. The possibility is made concrete for me by analogies to several
> smaller domains. In linguistics, the grammar that we are taught in
> high school does almost everything. In logic, first-order systems do
> almost everything. In sequence learning, hidden Markov models do
> almost everything. So, it is conceivable that some AGI method will be
> missing something fundamental, yet seem for a time to be
> all-encompassing.
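(For the sequence-learning case above, a minimal sketch of the standard forward algorithm on a toy two-state HMM, computing the probability of an observation sequence. The states, probabilities, and observations are invented for illustration.)

```python
# Toy hidden Markov model: the forward algorithm sums over all hidden
# state paths to get P(observation sequence).
states = ["Rainy", "Sunny"]
start = {"Rainy": 0.6, "Sunny": 0.4}
trans = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
         "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
        "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}

def forward(obs):
    # alpha[s] = P(observations so far AND current hidden state = s)
    alpha = {s: start[s] * emit[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {s: sum(alpha[p] * trans[p][s] for p in states) * emit[s][o]
                 for s in states}
    return sum(alpha.values())

p = forward(["walk", "shop", "clean"])
print("P(sequence) =", p)
```

A model like this handles an enormous range of sequence tasks, yet it is built on a Markov assumption that is fundamentally too weak for, e.g., long-range dependencies - a concrete instance of a partial principle that looks all-encompassing for a while.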
>
> On Mon, Aug 18, 2008 at 5:58 AM, Mike Tintner <[EMAIL PROTECTED]>
> wrote:
>>
>> Abram:I am worried-- worried that an AGI system based on anything less
>> than
>> the one most powerful logic will be able to fool AGI researchers for a
>> long time into thinking that it is capable of general intelligence.
>>
>> Can you explain this to me? (I really am interested in understanding your
>> thinking). AGIs have a roughly 50-year record of total failure. They have
>> never shown the slightest sign of general intelligence - of being able to
>> cross domains. How do you think they will or could fool anyone?
>>
>>
>>
>> -------------------------------------------
>> agi
>> Archives: https://www.listbox.com/member/archive/303/=now
>> RSS Feed: https://www.listbox.com/member/archive/rss/303/
>> Modify Your Subscription:
>> https://www.listbox.com/member/?&;
>> Powered by Listbox: http://www.listbox.com
>>
>


