Hi Boris

Thanks for clarifying it in terms I could understand. The first hurdle always 
seems to be one of semantics. I did not explain the notion of schema well 
enough. Schema, to me, is synonymous with an emerging cluster, not the 
traditional data schema. For example, a specific pattern of results may emerge 
as a relatively-real schema. The relatively-real construct operates within the 
context of a system of competency.

In my mind, schema would primarily (hierarchically) be associated with 
consciousness. I cannot see how a machine can be said to be learning if it has 
no awareness of what, how, and why it is learning. This association with a 
notion of machine consciousness would be in the sense of a highly complex 
reference point contributing (input/output) to formulating an internal 
perspective of the external environment. I think that is partly how language 
(as applied learning) generally develops in humans.

Further, the "mapping" I was referring to is also part of consciousness, which 
would evolve as a geometric construct enabling all "life" functionality. Bear 
in mind that, given X factors, any schema could be associated with any level at 
any point in time, even simultaneously. I would imagine instantiated schemata 
would be resident in memory only.

The mapping architecture, which I referred to, would probably contain the 
algorithms for pseudo-random associations. I use the term pseudo-randomness in 
the strictest quantum terms, meaning as a highly-abstract input/output.

I have generally maintained the view that such machine functionality should be 
generated as a forward-engineering principle. However, you have shown the way 
towards the likelihood of a concurrent-engineering (forward, reverse, and 
re-engineering) approach. This seems aligned with the potential contained 
within a last-mile, tacit-knowledge engineering, or de-abstraction method set, 
in the role of knowledge codifier.

I'm sure this is all old hat to you, but I'd appreciate your views on the 
probable application of the points I raised.

Rgds

Rob
________________________________
From: Boris Kazachenko via AGI <agi@agi.topicbox.com>
Sent: Monday, 11 June 2018 11:53 PM
To: agi@agi.topicbox.com
Subject: Re: [agi] Anyone interested in sharing your projects / data models

Thanks Rob,

I do have a scale: representational value = (sum of all variables) / redundancy.
This is computed per level of pattern ) whole pattern ) level of search ) 
system.
That's a general criterion, not scheme- or task-specific.
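
[A minimal sketch of the criterion above, as I read it: value = (sum of all 
variables) / redundancy, computed per pattern and filtered at each search 
level. The pattern structure, field names, and threshold are my own 
illustrative assumptions, not CogAlg's actual data model.]

```python
def representational_value(pattern):
    """Score a pattern: total variable magnitude over redundancy."""
    total = sum(abs(v) for v in pattern["variables"])
    # Redundancy counts overlapping representations; floor at 1 (itself).
    return total / max(pattern["redundancy"], 1)

def select(patterns, threshold):
    """Keep patterns whose value clears a filter, per level of search."""
    return [p for p in patterns if representational_value(p) > threshold]

patterns = [
    {"variables": [4, -2, 3], "redundancy": 1},  # value 9.0
    {"variables": [1, 1], "redundancy": 4},      # value 0.5
]
kept = select(patterns, threshold=1.0)  # only the first pattern survives
```

[Summing absolute magnitudes and flooring redundancy at 1 are arbitrary 
choices here; the point is only the shape of the criterion: a general, 
task-independent value per pattern, compared against a filter at each level.]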

There is no separate mapping, just multi-level search: cross-comparison among 
patterns.
And there is no need for randomizing, this search is driven by inputs and 
feedback.
I guess your "orientation" is my "motor feedback": speed-up | slow-down | 
direction-reversal for new inputs, in each dimension of the input flow.
So, yes, this hierarchy should be fluid / dynamic. But these dynamics are 
defined by the most general principles, not some extraneous schema.

Appreciate your interest!
https://github.com/boris-kz/CogAlg


On Mon, Jun 11, 2018 at 3:11 PM Nanograte Knowledge Technologies via AGI 
<agi@agi.topicbox.com> wrote:
Boris

I'd like to throw a few ideas around to see if they gel.

From my perspective, the hierarchical pattern-management process you describe 
reminded me of the flow characteristics of a researched meta-model for 
sustainable competency I once spent time on. I think this has significance for 
intelligence-based systems, as my view would be to aim for a system that 
exhibits optimal efficiency (effective complexity). For such a system to 
constantly orientate itself, you would have to provide a standard intelligence 
scale and a pseudo-random-driven, dynamic mapping process.

I think such a scale and map would have relevance for accuracy in situational 
decision making. [I'm slowly pulling the thread through to a conclusion here.] 
You may elect to position scaling and mapping schemas as clusters of 
higher-strategic management functionality. Again, hierarchy and levels go hand 
in glove. However, hierarchies may also become relative, depending on the 
schema employed. Up can be down, down can be up, and everything may happen in 
between. In this sense, hierarchy could become a gateway to state, and state 
the point of departure for every single moment, as timespace continuum, as 
systems event.

To my mind, such a system would be able to perform self-adaptation, a.k.a. 
autonomous behavior. If my view has merit, I would suggest that a number of 
essential components are still missing from your design schema. Such system 
components may be related to your thinking on levels of pattern recognition, 
and suited to your notion of hierarchical increments from the lowest (meta) 
level.

The wood for the trees, and the RNA for the fruit, kinda thing.

Best

Rob
________________________________
From: Boris Kazachenko via AGI <agi@agi.topicbox.com>
Sent: Monday, 11 June 2018 2:41 PM
To: agi@agi.topicbox.com
Subject: Re: [agi] Anyone interested in sharing your projects / data models


AGI's bottleneck must be in learning; anyone who focuses on something else is 
barking up the wrong tree...

Not just a bottleneck, it's the very definition of GI, the fitness / objective 
function of intelligence.
Specifically, unsupervised / value-free learning, AKA pattern recognition. 
Supervision and reinforcement are simple add-ons.
Anything else is a distraction. "Problem solving" is meaningless: there is no 
such thing as "problem" in general, except as defined above.

Now think about this:  we already have the weapon (deep learning) which is 
capable of learning arbitrary function mappings.

Yeah, we have some random hammer, so every problem looks like a nail.

We are facing a learning problem which we already know the formal definition of.

No, you don't. There is no constructive theory behind ANNs; they're just a 
hack vaguely inspired by another hack: the human brain. Which is a misshapen 
kludge, and this list makes that perfectly obvious.
Sorry, can't help it.

Anyway, this is my alternative:  https://github.com/boris-kz/CogAlg



On Mon, Jun 11, 2018 at 3:41 AM YKY via AGI 
<agi@agi.topicbox.com> wrote:
On Mon, Jun 11, 2018 at 2:55 PM, MP via AGI 
<agi@agi.topicbox.com> wrote:
Right. Most of them work off a variant of depth-first search, which would 
usually lead to a combinatorial explosion, or some kind of heuristic to cut 
down on search time at some other cognitive expense...

Not to mention most of them run off human-made rules, rather than learning 
them for themselves through subjective experience.

I highly doubt even Murray’s bizarre system subjectively learns. There are 
hand-coded concepts at the beginning of his trippy source code.

How can your system overcome this? How can it subjectively learn without human 
intervention?


AGI's bottleneck must be in learning; anyone who focuses on something else is 
barking up the wrong tree...

Now think about this:  we already have the weapon (deep learning) which is 
capable of learning arbitrary function mappings.  We are facing a learning 
problem which we already know the formal definition of.  So we just need to 
apply that weapon to the problem.  How hard can that be?

Well, it turns out it's very hard to understand the abstract (algebraic) 
structure of logic; it took me a long time to master, but now I have a pretty 
clear view of its structure.

Inductive learning in logic is done via some kind of depth-first search in the 
space of logic formulas, as you described.  The neural network can also perform 
a search in the weight space, maximizing some objective functions.  So the 
weight space must somehow correspond to the space of logic formulas.
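
[The formula-space search described above can be made concrete with a toy: a 
depth-first enumeration of propositional formulas, searched for one that fits 
labeled examples. The grammar, depth bound, and target concept here are 
illustrative stand-ins of mine, not the actual system.]

```python
import itertools

def evaluate(formula, assignment):
    """Recursively evaluate a propositional formula over named variables."""
    op, *args = formula
    if op == "var":
        return assignment[args[0]]
    if op == "not":
        return not evaluate(args[0], assignment)
    if op == "and":
        return evaluate(args[0], assignment) and evaluate(args[1], assignment)
    if op == "or":
        return evaluate(args[0], assignment) or evaluate(args[1], assignment)

def formulas(depth):
    """Depth-first enumeration of all formulas up to the given depth."""
    if depth == 0:
        for v in ("a", "b"):
            yield ("var", v)
        return
    for sub in formulas(depth - 1):
        yield sub
        yield ("not", sub)
    for f, g in itertools.product(formulas(depth - 1), repeat=2):
        yield ("and", f, g)
        yield ("or", f, g)

def fits(formula, examples):
    """True if the formula matches every labeled assignment."""
    return all(evaluate(formula, x) == y for x, y in examples)

# Target concept: XOR of a and b, given as labeled examples.
examples = [({"a": False, "b": False}, False),
            ({"a": False, "b": True}, True),
            ({"a": True, "b": False}, True),
            ({"a": True, "b": True}, False)]

# Finds a formula logically equivalent to XOR.
found = next(f for f in formulas(3) if fits(f, examples))
```

[The weight-space side of the correspondence is absent in this toy; the search 
is purely symbolic. But it shows the combinatorial explosion mentioned earlier 
in the thread: the formula space roughly squares with each added depth level, 
which is exactly what a continuous weight-space search is meant to sidestep.]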

In my proposal (which has just failed), I encoded the formulas as the output 
of the neural network. That is one example, albeit one that neglected the 
first-order logic aspects.

Does this answer your question?

And thanks for asking, because that helps me to clarify my thinking as well... ☺

Artificial General Intelligence List: https://agi.topicbox.com/latest / AGI
Discussions: https://agi.topicbox.com/groups/agi
Participants: https://agi.topicbox.com/groups/agi/members
Delivery options: https://agi.topicbox.com/groups
Permalink: https://agi.topicbox.com/groups/agi/T731509cdd81e3f5f-M1306bed8c1f5b8ab6b0646b9
