Thanks Rob,

I do have a scale: representational value = (sum of all variables) / redundancy.
This is computed per level of pattern -> whole pattern -> level of search -> system.
That's a general criterion, not scheme- or task-specific.
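
For concreteness, here is a toy sketch in Python (illustrative only, not the
actual CogAlg code; pattern_value, "vars" and "redundancy" are just placeholder
names):

def pattern_value(variables, redundancy):
    # value of one pattern: sum of its derived variables / redundancy,
    # where redundancy counts overlapping higher-priority representations
    return sum(variables) / max(redundancy, 1)

def level_value(patterns):
    # value of one level of search: sum over its patterns
    return sum(pattern_value(p["vars"], p["redundancy"]) for p in patterns)

def system_value(levels):
    # value of the whole system: sum over levels of search
    return sum(level_value(lev) for lev in levels)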

There is no separate mapping, just multi-level search: cross-comparison
among patterns.
And there is no need for randomizing: this search is driven by inputs and
feedback.
I guess your "orientation" is my "motor feedback": speed-up | slow-down |
direction-reversal for new inputs, in each dimension of the input flow.
So, yes, this hierarchy should be fluid / dynamic. But these dynamics are
defined by the most general principles, not by some extraneous schema.
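
To make the motor-feedback part concrete, a toy sketch (again illustrative
only, not actual CogAlg code; the value / cost comparison and the step sizes
are made-up placeholders):

def adjust_step(value, cost, step):
    # feedback per dimension of the input flow:
    # speed-up | slow-down | direction-reversal of the scan over new inputs
    if value > cost:
        return step * 2           # speed-up: coarser sampling of new inputs
    elif value > 0:
        return max(step // 2, 1)  # slow-down: finer sampling
    else:
        return -step              # direction-reversal: re-scan past inputs

# separate feedback per dimension, e.g. x, y, t:
steps = {dim: adjust_step(v, c, s) for dim, (v, c, s) in
         {"x": (8.0, 2.0, 2), "y": (1.5, 2.0, 4), "t": (-0.5, 2.0, 2)}.items()}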

Appreciate your interest!
https://github.com/boris-kz/CogAlg


On Mon, Jun 11, 2018 at 3:11 PM Nanograte Knowledge Technologies via AGI <
[email protected]> wrote:

> Boris
>
> I'd like to throw a few ideas around to see if they gel.
>
> From my perspective, your hierarchical pattern-management process
> reminded me of the flow characteristics of a researched meta-model
> for sustainable competency I once spent time on. I think this has
> significance for intelligence-based systems, as my view would be to aim for
> a system that exhibits optimal efficiency (effective complexity). For such
> a system to constantly orientate itself, you'll have to be able to provide
> a standard intelligence scale and a pseudo-random driven, dynamic, mapping
> process.
>
> I think, such a scale and map would have relevance for accuracy in
> situational decision making. [I'm slowly pulling the thread through to a
> conclusion here.] You may elect to position scaling and mapping schemas as
> clusters of higher-strategic management functionality. Again, hierarchy and
> levels go hand in glove. However, hierarchies may also become relative,
> depending on the schema employed. Up can be down, and down can be up, and
> everything may happen in between. In this sense, hierarchy could become a
> gateway to state, and state the point of departure for every single moment,
> as timespace continuum, as systems event.
>
> To my mind, such a system would be able to perform self-adaptation, aka
> autonomous behavior. If my view should have merit, I would suggest that
> there are a number of essential components still missing from your design
> schema.  Such system components may be related to your thinking on levels
> of pattern recognition, and suitable to your notion of hierarchical
> increments from the lowest (meta) level.
>
> The wood for the trees, and the RNA for the fruit, kinda thing.
>
> Best
>
> Rob
> ------------------------------
> *From:* Boris Kazachenko via AGI <[email protected]>
> *Sent:* Monday, 11 June 2018 2:41 PM
> *To:* [email protected]
> *Subject:* Re: [agi] Anyone interested in sharing your projects / data
> models
>
>
>
> ​AGI's bottleneck must be in *learning*, any​one who focuses on something
> else is barking under the wrong tree...
>
>
> Not just a bottleneck, it's the very definition of GI, the fitness /
> objective function of intelligence.
> Specifically, unsupervised / value-free learning, AKA pattern recognition.
> Supervision and reinforcement are simple add-ons.
> Anything else is a distraction. "Problem solving" is meaningless: there is
> no such thing as "problem" in general, except as defined above.
>
>
> Now think about this:  we already have the weapon (deep learning) which is
> capable of learning *arbitrary* function mappings.
>
>
> Yeah, we have some random hammer, so every problem looks like a nail.
>
>
> We are facing a learning problem which we already know the formal
> definition of.
>
>
> No, you don't. There is no constructive theory behind ANN, it's just a hack
> vaguely inspired by another hack: the human brain.
> Which is a misshapen kludge, and this list makes it perfectly obvious.
> Sorry, can't help it.
>
> Anyway, this is my alternative:  https://github.com/boris-kz/CogAlg
>
>
>
>
> On Mon, Jun 11, 2018 at 3:41 AM YKY via AGI <[email protected]> wrote:
>
> On Mon, Jun 11, 2018 at 2:55 PM, MP via AGI <[email protected]> wrote:
>
> Right. Most of them work off of a variant of depth-first search, which
> would usually lead to a combinatorial explosion, or some kind of heuristic
> to cut down on search time at some other cognitive expense...
>
> Not to mention most of them run off human-made rules, rather than learning
> them for themselves through subjective experience.
>
> I highly doubt even Murray’s bizarre system subjectively learns. There are
> hand-coded concepts at the beginning of his trippy source code.
>
> How can your system overcome this? How can it subjectively learn without
> human intervention?
>
>
>
> ​AGI's bottleneck must be in *learning*, any​one who focuses on something
> else is barking under the wrong tree...
>
> Now think about this:  we already have the weapon (deep learning) which is
> capable of learning *arbitrary* function mappings.  We are facing a
> learning problem which we already know the formal definition of.  So we
> just need to apply that weapon to the problem.  How hard can that be?
>
> Well, it turns out it's very hard to understand the abstract (algebraic)
> structure of logic; that took me a long time to master, but now I have a
> pretty clear view of its structure.
>
> Inductive learning in logic is done via some kind of depth-first search in
> the space of logic formulas, as you described.  The neural network can also
> perform a search in the weight space, maximizing some objective functions.
> So the weight space must somehow *correspond* to the space of logic
> formulas.
>
> In my proposal (which has just freshly failed), I encoded the formulas as
> the output of the neural network. That is one example, albeit it neglected
> the first-order logic aspects.
>
> Does this answer your question?
>
> And thanks for asking, because that helps me to clarify my thinking as
> well... ☺
>
