Ben,

On Mon, Dec 24, 2012 at 8:39 PM, Ben Goertzel <[email protected]> wrote:

>
> Steve,
>
> I am adjusting your statement to add what I think is missing:
>>
>>>
>>> "any prospective AGI platform absolutely **MUST** be capable of *rapidly
>>> learning to* perform*ing* substantially all of the high-level cognitive
>>> information processing functions that have been observed in human
>>> mind/brains *without carefully ignoring areas (like the hypothalamus)
>>> that perform functions that appear incompatible with the platform.*"
>>>
>>
>>
> The hypothalamus is not a function, but rather a system.
>

The same can be said about ANY AGI-like capability - they all parallel
specific regions of the brain. You are literally betting the near-term
future of AGI that the relatively simple learning that underlies process
control is unrelated to the more complex learning that underlies cognitive
function. I suspect that you are trying to discover a complex principle
buried in incredible complexity, when the same principles are available for
examination without the embedded complexity.

> So to accord with my statement you would need to enumerate which of the
> human mind's high-level cognitive functions you think OpenCog (or other AGI
> designs) ignores due to not adequately including sufficiently
> "hypothalamus-like" components or processes...
>

I suspect coordination of all sorts, where effectors must be operated in
sometimes radical ways to achieve a desired movement. The hypothalamus is
concerned mostly with chemical processes that are slow enough to analyze,
whereas learned robotic coordination is more difficult, but is amenable to
deep analysis. The challenge is to design a system that learns how to do
the deep analysis, rather than performing the deep analysis yourself and
then building it into the robotic system.
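The contrast can be sketched in a toy example (entirely my own illustration, not from OpenCog or anyone's actual code): a hand-coded approach would analyze the actuator and invert it on paper, whereas the system below discovers an inverse model from its own trials. The `plant` function and its linear form are assumed for illustration only.

```python
# Illustrative sketch: learn a plant's inverse from trials instead of
# deriving and hard-coding it. All names here are hypothetical.
import random

random.seed(0)  # reproducible exploratory commands

def plant(command):
    # The actuator, unknown to the agent. The "deep analysis" route would
    # invert this formula analytically; the agent never sees it.
    return 2.5 * command + 0.3

# The agent probes the plant with a handful of exploratory commands...
trials = [(c, plant(c)) for c in (random.uniform(-1, 1) for _ in range(20))]

# ...and fits a linear inverse model (output -> command) by least squares.
n = len(trials)
xs = [out for _, out in trials]   # observed outputs
ys = [cmd for cmd, _ in trials]   # commands that produced them
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

def learned_inverse(target):
    # Command predicted to drive the plant to the target output.
    return slope * target + intercept

target = 1.0
achieved = plant(learned_inverse(target))
print(abs(achieved - target) < 1e-6)  # True: the discovered inverse works
```

The point of the sketch is only that the inversion was discovered from the agent's own interaction data, not programmed in; a real coordination learner would of course face nonlinear, high-dimensional plants.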

>
> Also, I don't agree at all that an AGI must be capable of rapidly learning
> to perform all its high-level functions.  A human mind learns to cognize
> over a period of years, and does so via a complex combination of learning
> with the scheduled/triggered unveiling of genetically encoded
> capabilities....   Similarly I think it's OK if an AGI learns its cognitive
> capabilities over a period of years, and if it leverages some appropriately
> in-built capabilities.
>

I absolutely agree regarding relative speeds. However, most present methods
of ML are WAY too slow. I was just emphasizing the need to move beyond
present ML methods.

However, I still believe that function needs to be learned or discovered,
rather than programmed.


> A human mind is not a tabula rasa, and nor need an AGI mind be...
>

OK, so here is the statement modified as per our last go-round:

"any prospective AGI platform absolutely **MUST** be capable of *learning
at biologically comparable speeds to* perform substantially all of the
high-level cognitive information processing functions that have been
observed in human mind/brains *without carefully ignoring functions that
appear incompatible with the platform, like spontaneously discovering the
nature of the world in which they 'live'.*"

Is this OK yet?

Steve



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
