Wow, sorry about that. I am using Firefox and had no problems. The
site was just the first reference I was able to find using Google.

Wikipedia references the same fact:

http://en.wikipedia.org/wiki/Feedforward_neural_network#Multi-layer_perceptron
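
As a toy illustration of the theorem the quoted page states (a single hidden layer of non-linear units, trained with plain backpropagation, can approximate a smooth function), here is a minimal sketch in Python/NumPy. This is my own example, not taken from either site, and the target function, layer width, learning rate, and step count are all arbitrary choices:

```python
import numpy as np

# One hidden layer of sigmoid units, linear output unit,
# trained by full-batch backpropagation to fit sin(x) on [-3, 3].
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X)

H, lr = 20, 0.5                       # hidden units, learning rate
W1 = rng.normal(0, 0.5, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)

def forward(X):
    h = sigmoid(X @ W1 + b1)          # hidden activations
    return h, h @ W2 + b2             # linear output

_, out = forward(X)
mse0 = np.mean((out - y) ** 2)        # error before training

for _ in range(5000):
    h, out = forward(X)
    err = (out - y) / len(X)          # gradient of mean squared error
    # Backpropagate: output layer first, then hidden layer.
    gW2, gb2 = h.T @ err, err.sum(axis=0)
    dh = (err @ W2.T) * h * (1 - h)   # sigmoid derivative: h * (1 - h)
    gW1, gb1 = X.T @ dh, dh.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, out = forward(X)
mse1 = np.mean((out - y) ** 2)        # error after training
print(f"MSE before: {mse0:.4f}  after: {mse1:.4f}")
```

The point of the theorem is only that such a network *can* approximate the function to arbitrary precision given enough hidden units; it says nothing about whether backpropagation will find those weights efficiently, which is part of why the completeness result was less useful in practice than it sounds.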

On Tue, Aug 19, 2008 at 3:42 AM, Brad Paulsen <[EMAIL PROTECTED]> wrote:
> Abram,
>
> Just FYI... When I attempted to access the Web page in your message,
> http://www.learnartificialneuralnetworks.com/ (that's without the
> "backpropagation.html" part), my virus checker, AVG, blocked the attempt
> with a message similar to the following:
>
> Threat detected!
> Virus found: JS/Downloader.Agent
> Detected on open
>
> Quarantined
>
> On a second attempt, I also got the IE 7.0 warning banner:
>
> "This website wants to run the following add-on: "Microsoft Data Access -
> Remote Data Services Dat...' from 'Microsoft Corporation'.  If you trust the
> website and the add-on and want to allow it to run, click..." (of course, I
> didn't click).
>
> This time, AVG gave me the option to "heal" the virus.  I took this option.
>
> It may be nothing, but it also could be a "drive by" download attempt of
> which the owners of that site may not be aware.
>
> Cheers,
>
> Brad
>
>
>
> Abram Demski wrote:
>>
>> Mike,
>>
>> There are at least two ways this can happen, I think. The first way is
>> that a mechanism is theoretically proven to be "complete", for some
>> less-than-sufficient formalism. The best example of this is one I
>> already mentioned: the neural nets of the nineties (specifically,
>> feedforward neural nets with multiple hidden layers). There is a
>> completeness result associated with these. I quote from
>> http://www.learnartificialneuralnetworks.com/backpropagation.html :
>>
>> "Although backpropagation can be applied to networks with any number
>> of layers, just as for networks with binary units it has been shown
>> (Hornik, Stinchcombe, & White, 1989; Funahashi, 1989; Cybenko, 1989;
>> Hartman, Keeler, & Kowalski, 1990) that only one layer of hidden units
>> suffices to approximate any function with finitely many discontinuities
>> to arbitrary precision, provided the activation functions of the
>> hidden units are non-linear (the universal approximation theorem). In
>> most applications a feed-forward network with a single layer of hidden
>> units is used with a sigmoid activation function for the units. "
>>
>> This sort of thing could have contributed to the 50 years of
>> less-than-success you mentioned.
>>
>> The second way this phenomenon could manifest is more a personal fear
>> than anything else. I am worried that there really might be partial
>> principles of mind that could seem to be able to do everything for a
>> time. The possibility is made concrete for me by analogies to several
>> smaller domains. In linguistics, the grammar that we are taught in
>> high school does almost everything. In logic, first-order systems do
>> almost everything. In sequence learning, hidden Markov models do
>> almost everything. So, it is conceivable that some AGI method will be
>> missing something fundamental, yet seem for a time to be
>> all-encompassing.
>>
>> On Mon, Aug 18, 2008 at 5:58 AM, Mike Tintner <[EMAIL PROTECTED]>
>> wrote:
>>>
>>> Abram:I am worried-- worried that an AGI system based on anything less
>>> than
>>> the one most powerful logic will be able to fool AGI researchers for a
>>> long time into thinking that it is capable of general intelligence.
>>>
>>> Can you explain this to me? (I really am interested in understanding your
>>> thinking). AGIs have a roughly 50-year record of total failure. They have
>>> never shown the slightest sign of general intelligence - of being able to
>>> cross domains. How do you think they will or could fool anyone?
>>>
>>>
>>>
>>> -------------------------------------------
>>> agi
>>> Archives: https://www.listbox.com/member/archive/303/=now
>>> RSS Feed: https://www.listbox.com/member/archive/rss/303/
>>> Modify Your Subscription:
>>> https://www.listbox.com/member/?&;
>>> Powered by Listbox: http://www.listbox.com
>>>
>>
>>
>>
>
>
>

