Ben,

I like your idea of permuting the principle and kicking it back and forth.
In that spirit...

On Mon, Dec 24, 2012 at 3:27 PM, Ben Goertzel <[email protected]> wrote:

>
>
> *Steve's AGI Architectural Test:*
>>
>> Everyone (including even me) agrees that AGIs won't need to simulate the
>> inner working of neurons. However, any prospective AGI platform absolutely
>> **MUST** be capable of performing substantially all of the information
>> processing functions that have been observed in neurons. Sure, some of
>> these may prove to be unnecessary, but we can't now determine which are
>> essential, and which are superfluous. My dP/dt observation is just one of
>> many such information processing functions, which also includes other
>> basics, like the retrograde flow of information. Once your platform is
>> "playing with a full deck" then clever design might conceivably succeed.
>> However, until then, you can never ever rationally hope to succeed, not in
>> a few years, and not in a few centuries.
>>
>> Right? If not, then why not?
>>
>> Steve
>>
>
>
> Steve,
>
> OpenCog aims to embody the key functions of the human *mind*, but doesn't
> have processing elements closely analogous to neurons....
>

Perhaps I misunderstand OpenCog, but in developing Bayesian probabilities
of things, those probabilities ARE analogous to the outputs of neurons.
Sure, you are free to utilize different mechanisms, e.g. AI image
recognition, but somewhere the same sorts of computations must be done to
reach conclusions...
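To make the analogy concrete: a naive-Bayes log-odds update is mathematically the same computation as a sigmoid "neuron" summing weighted evidence. This is a minimal sketch of that equivalence, not a description of OpenCog's actual mechanism; the evidence values and prior are made up for illustration.

```python
import math

def sigmoid(x):
    """Logistic function, mapping log-odds back to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical evidence: log-likelihood ratios log(P(e|H) / P(e|not H)).
log_likelihood_ratios = [0.8, -0.3, 1.2]
prior_log_odds = math.log(0.5 / 0.5)  # uninformative prior

# Bayesian route: accumulate log-odds, then convert to a probability.
posterior = sigmoid(prior_log_odds + sum(log_likelihood_ratios))

# "Neuron" route: a weighted sum of inputs passed through a sigmoid
# activation, with unit weights and a bias equal to the prior log-odds.
neuron_output = sigmoid(prior_log_odds +
                        sum(1.0 * e for e in log_likelihood_ratios))

# The two routes compute the identical quantity.
assert abs(posterior - neuron_output) < 1e-12
print(round(posterior, 4))  # -> 0.8455
```

Under these (invented) inputs, both routes yield the same posterior, which is the sense in which a Bayesian inference element and a sigmoid neuron output "the same sorts of computations."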

You say
>
> "any prospective AGI platform absolutely **MUST** be capable of performing
> substantially all of the information processing functions that have been
> observed in neurons."
>
> but I think the truth is more like
>

I am adjusting your statement to add what I think is missing:

>
> "any prospective AGI platform absolutely **MUST** be capable of *rapidly
> learning to* perform substantially all of the high-level cognitive
> information processing functions that have been observed in human
> mind/brains *without carefully ignoring areas (like the hypothalamus)
> that perform functions that appear incompatible with the platform.*"
>

It is my own suspicion (I'll write a page or so about this if you would
like) that the two highlighted areas above are closely connected in theory,
though widely disparate in presentation. I suspect that success in either
of these areas will quickly lead to success in the other. Hence, it is
my hope that you will embrace at least one of these two areas.

In your response, I suggest removing the formatting in the above and adding
your own new formatting.

Steve



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Powered by Listbox: http://www.listbox.com
