Mike, I beg to differ and to agree on various points. I differ on the 
perspective of learning. Typical learning processes in humans involve the 
generic functions of continuous deabstraction, classification, association, 
prioritization, storage, and recall, and perhaps a few more.

Where I would agree is that the schema of Learning (not just the process) would 
include a sub-schema of Compression, probably in the contexts of storage and 
retrieval. In themselves - except for continuous deabstraction - these 
functions are already performed in a myriad of computational applications.

Continuous deabstraction may be hard-coded as a computational routine. The 
theory exists. In some instances, this may qualify as AI, in particular machine 
learning.

However, what many researchers the world over have realized is that competency 
in deabstraction is still not enough to evolve AI to a level of exponential 
growth far beyond humans. I think this problem persists because of the linear 
tendency involved in all technological progress. Someone else said exactly such 
a linear thing, stating that progress is a matter of progressive steps.

In my humble view (because we know so little), that is where 97% of current 
AI-to-AGI designers go wrong. How does anyone invent a computational 
architecture while limited by one's own linear thinking? There I would agree 
with your philosophical premise.

The need for radically new thinking was previously stated on this forum. We 
cannot totally rely on past knowledge. We need to approach the evolution of an 
eventual solution from an otherworldly perspective. In a way, designers of AGI 
need the ability to stop being human. It's mad, I know. But I think that's what 
it would take: a functional madness when compared to the R&D norms.

Ben seems to have such a functional madness (no insult intended, but rather a 
compliment). But even Ben, with his super intelligence and vast knowledge of so 
many things, still hasn't figured this out. I think that's because the exact 
point is that it cannot be figured out first. We love figuring things out, 
don't we?

Such designers need a gift for visualizing an evolutionary architecture that 
requires no human intelligence, after the meta-architectural fact (probably up 
to six layers deep for starters), for its own development.

I appreciate Jim Bromer's labeling of my musings as madness, or fantastical. It 
means I'm on the right track. At least I have the prospect of discovering a new 
portal, or something similar, towards AGI enablers. My view is that those who 
persist in their linearized, computer-based thinking, no matter how eloquent, 
"complex adaptive", or advanced, never will. It's a matter of the inherent 
quality of the mind first, before relying on the superiority of the mind alone.

As such, it's an existential challenge of Homo sapiens proportions. We 
seemingly cannot even tolerate diversity on this forum without resorting to 
insults and social ridicule. How could we ever hope to design an architecture 
(let alone some knowledge-based computer routine) to set the stage for enabling 
diversity?

In my opinion, within the realm of realizing a functional version of AGI, there 
simply is no room for an intellectual class struggle. The higher intelligence 
would not allow that to be. This social "need", which continuously hampers this 
forum, is the very death of it. We seemingly are the AGI in action, to be 
observed.

As humans, we all succumb to such carnal, human characteristics: to need and to 
want all the time. We tend to be consumers, not prophetic designers and 
builders. With all due respect, I see no superiority here in any of us.

The art and science of AGI has relevance. Einstein reminded us of as much. What 
seems to be lacking most is not the rigor of science, but the creative beauty 
of art.

Purpose always goes to motive.

Robert Benjamin


Mike Archbold via AGI <[email protected]>
Sent: Wednesday, 10 October 2018 12:47 AM
To: AGI
Subject: Re: [agi] Compressed Algorithms that can work on compressed data.

The fascinating thing for me about this discussion is the notion that
when we talk about compression, it is just the psychological
equivalent of learning an idea. In philosophy it is like determining
what is essential, universal. In old AI it would be like learning the
rules. It's generalization.

Whenever anybody wrote a program or manual procedure anywhere, it was
compression of the circumstances into some kind of generality that
could be expressed in as simple a program as possible. So, the point I
am making is that this is not something limited to AGI.
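[Editorial aside: Mike's point - that a program is a compression of its 
circumstances into a generality - can be illustrated with a toy sketch of my 
own (the data and the linear rule are hypothetical, not from this thread). 
"Learning" the rule behind many observations reduces them to two numbers.]

```python
# A thousand (x, y) observations generated by a simple hidden rule.
observations = [(x, 3 * x + 2) for x in range(1000)]

# "Learning": recover the general rule (slope and intercept) from two points.
(x0, y0), (x1, y1) = observations[0], observations[1]
slope = (y1 - y0) / (x1 - x0)
intercept = y0 - slope * x0

# The learned rule reproduces every observation: generalization as compression.
assert all(y == slope * x + intercept for x, y in observations)

# The whole dataset is now carried by two numbers instead of 1000 pairs.
print(slope, intercept)
```

In Mike's terms, the pair (slope, intercept) is the "essential, universal" part 
of the data; everything else is recoverable and thus redundant.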

On 10/9/18, Stefan Reich via AGI <[email protected]> wrote:
> This is such a weird statement. Like you try to make the human look stupid,
> but it is really smarter for AI production to have smart humans. I kind of
> conclude you are not actually in the AI game yourself.
>
> On Mon, 8 Oct 2018 at 18:03, Matt Mahoney via AGI <[email protected]>
> wrote:
>
>>
>>
>> On Mon, Oct 8, 2018, 9:44 AM Stefan Reich via AGI <[email protected]>
>> wrote:
>>
>>>
>>>
>>> Matt Mahoney via AGI <[email protected]> schrieb am So., 7. Okt. 2018
>>> 03:25:
>>>
>>>> I understand the desire to understand what an AGI knows. But that makes
>>>> you smarter than the AGI. I don't think you want that.
>>>>
>>>
>>> Sure I want that!
>>>
>>
>> No you don't. It would be like writing a chess program that you could
>> always beat.
>>
>> *Artificial General Intelligence List <https://agi.topicbox.com/latest>*
>> / AGI / see discussions <https://agi.topicbox.com/groups/agi> +
>> participants <https://agi.topicbox.com/groups/agi/members> + delivery
>> options <https://agi.topicbox.com/groups/agi/subscription> Permalink
>> <https://agi.topicbox.com/groups/agi/T55454c75265cabe2-Ma6f01d88c4758d17dc492263>
>>
>
>
> --
> Stefan Reich
> BotCompany.de // Java-based operating systems

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T55454c75265cabe2-M6c48d3d0d927022be2779cf0
Delivery options: https://agi.topicbox.com/groups/agi/subscription
