Rob,

Yes, there can be different definitions of pattern, but only one wins in
the end, because generalization is a reduction.
My definition is a set of matching inputs. It's similar to yours, except that
your "occurrence" implies a binary is | is not, while my match is partial:
above-average similarity between the inputs.
And the inputs must have at least two parameters: content (what) and
coordinate (where), because the value of any prediction = precision of what *
precision of where.
The number of matching inputs is also relevant: it's a compositionally higher
level of pattern, to be compared on the next level of search.
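
To make this concrete, here's a toy sketch in Python (my own illustration,
not CogAlg code; the similarity measure and all names are stand-ins):

# Each input has at least two parameters: content (what) and coordinate
# (where); here an input is a tuple (what, where). A pattern is the set of
# pairwise matches whose similarity is above the average over all
# comparisons: a partial match, not a binary occurrence.

def similarity(a, b):
    return -abs(a[0] - b[0])  # stand-in measure, over content only

def form_pattern(inputs):
    pairs = [(a, b) for i, a in enumerate(inputs) for b in inputs[i + 1:]]
    sims = [similarity(a, b) for a, b in pairs]
    ave = sum(sims) / len(sims)  # average similarity = match threshold
    return [p for p, s in zip(pairs, sims) if s > ave]

def prediction_value(precision_what, precision_where):
    # value of any prediction = precision of what * precision of where
    return precision_what * precision_where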

Yes, the inputs can have any degree of complexity. But the only way for the
system to discover that complexity is to build corresponding patterns
bottom-up.
No matter how complex the input ultimately is, it consists of simple
elements, and the search for patterns must start from the simplest: "pixels",
or the limit of resolution.
That's the first level of search, which forms compositionally higher patterns
and outputs them to the next level. Higher complexity can only be
discovered on higher levels of search.
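
For one dimension, a minimal sketch of such a first level (again my
illustration only; CogAlg's actual comparison and pattern structure are
richer):

# First level of search over the simplest elements: a line of "pixels".
# Consecutive pixels are cross-compared; a contiguous span whose
# differences stay below average forms one higher pattern, which is
# output to the next level of search.

def first_level(pixels):
    diffs = [abs(b - a) for a, b in zip(pixels, pixels[1:])]
    ave = sum(diffs) / len(diffs)
    patterns, span = [], [pixels[0]]
    for p, d in zip(pixels[1:], diffs):
        if d < ave:               # above-average similarity: extend pattern
            span.append(p)
        else:                     # match ends: output pattern, start a new one
            patterns.append(span)
            span = [p]
    patterns.append(span)
    return patterns               # inputs for the next level of search

first_level([10, 11, 11, 40, 41, 42, 5])
# -> [[10, 11, 11], [40, 41, 42], [5]]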
All this is explained in my intro: https://github.com/boris-kz/CogAlg . Do
you have any suggestions on how to make it more accessible?



On Wed, Jun 13, 2018 at 2:31 AM Nanograte Knowledge Technologies via AGI <
agi@agi.topicbox.com> wrote:

> Boris
>
> Thanks for the feedback. Our views are conceptually similar.
>
> With regard to pattern, I think various definitions are possible.
>
> For example, a pattern is generally regarded as an occurrence of 2 or
> more. In other words, a one-to-many relationship could be viewed as a
> pattern. Thus, patterns do exist within simple systems.
>
> However, patterns also exist within complex systems. My view of pattern is
> a holistic one, e.g., the Mandelbrot set, among others. As such, a pattern
> could be independent of how many times a similar event occurs, and may even
> be identified where the absence of events becomes the fingerprint (an
> approach you seemingly take). One could consider a pattern of learning as a
> possible example of this.
>
> Comments?
>
> Rob
> ------------------------------
> *From:* Boris Kazachenko via AGI <agi@agi.topicbox.com>
> *Sent:* Wednesday, 13 June 2018 12:03 AM
> *To:* agi@agi.topicbox.com
> *Subject:* Re: [agi] Anyone interested in sharing your projects / data
> models
>
> Rob,
>
> I don't see how your ideas of schema, map, and even "consciousness" or
> "awareness" are different from my patterns.
> Patterns can represent anything: "input" is relative, at any level of
> complexity and generalization; "self-representation" is also relative.
> These are all learned representations, and the mechanics of learning
> should not depend on the subject, else it's not general.
> Redundant terms are a curse of philosophy and anything it touches :).
>
> So, yes, from my POV all of that should be emergent and "submersible"; that
> is the point of strictly bottom-up learning, with subsequent feedback.
> All we need to code is a scalable and recursive pattern-discovery algorithm.
> And the only way to ensure that it is general is to derive it from the
> very definition of pattern.
> https://github.com/boris-kz/CogAlg
>
>
> On Tue, Jun 12, 2018 at 1:45 PM Nanograte Knowledge Technologies via AGI <
> agi@agi.topicbox.com> wrote:
>
> Hi Boris
>
> Thanks for clarifying it in terms I could understand. The first hurdle
> always seems to be one of semantics. I did not explain the notion of schema
> well enough. Schema, to me, is synonymous with an emerging cluster and not
> the traditional data schema. For example, a specific pattern of results may
> emerge as a relatively-real schema. The relatively-real construct operates
> within context of a system of competency.
>
> In my mind, schema would primarily (hierarchically) be associated with
> consciousness. I cannot see how a machine can be said to be learning, if it
> has no awareness of what, how and why it is learning. This association to a
> notion of machine consciousness would be in the sense of a highly-complex
> reference point contributing (input/output) to formulating an internal
> perspective of the external environment. I think that is partly how
> language (as applied learning) generally develops in humans.
>
> Further, the "mapping" I was referring to is also part of consciousness,
> which would evolve as a geometric construct enabling all "life"
> functionality. Bearing in mind, given X factors, any schema could be
> associated with any level at any point in time, even simultaneously. I would
> imagine instantiated schemata would be resident in memory only.
>
> The mapping architecture, which I referred to, would probably contain the
> algorithms for pseudo-random associations. I use the term pseudo-randomness
> in the strictest quantum terms, meaning as a highly-abstract input/output.
>
> I generally maintained a view to generate such machine functionality as a
> forward-engineering principle. However, you have shown the way towards the
> likelihood of a concurrent-engineering (forward, reverse, and
> re-engineering) approach. This seems aligned to the potential contained
> within a last-mile, tacit-knowledge engineering, or de-abstraction method
> set, in the role of knowledge codifier.
>
> I'm sure this is all old hat to you, but I'd appreciate your views on the
> probable application of the points I raised.
>
> Rgds
>
> Rob
> ------------------------------
> *From:* Boris Kazachenko via AGI <agi@agi.topicbox.com>
> *Sent:* Monday, 11 June 2018 11:53 PM
> *To:* agi@agi.topicbox.com
> *Subject:* Re: [agi] Anyone interested in sharing your projects / data
> models
>
> Thanks Rob,
>
> I do have a scale: representational value = (sum of all variables) /
> redundancy.
> This is computed per level of pattern ) whole pattern ) level of search )
> system.
> That's a general criterion, not scheme- or task-specific.
>
> There is no separate mapping, just multi-level search: cross-comparison
> among patterns.
> And there is no need for randomizing: this search is driven by inputs and
> feedback.
> I guess your "orientation" is my "motor feedback": speed-up | slow-down |
> direction-reversal for new inputs, in each dimension of the input flow.
> So, yes, this hierarchy should be fluid / dynamic. But these dynamics are
> defined by the most general principles, not some extraneous schema.
>
> Appreciate your interest!
> https://github.com/boris-kz/CogAlg
>
>
> On Mon, Jun 11, 2018 at 3:11 PM Nanograte Knowledge Technologies via AGI <
> agi@agi.topicbox.com> wrote:
>
> Boris
>
> I'd like to throw a few ideas around to see if they gel.
>
> From my perspective, what your hierarchical pattern-management process
> describes reminded me of the flow characteristics of a researched meta-model
> for sustainable competency I once spent time on. I think this has
> significance for intelligence-based systems, as my view would be to aim for
> a system that exhibits optimal efficiency (effective complexity). For such
> a system to constantly orientate itself, you'll have to be able to provide
> a standard intelligence scale and a pseudo-random-driven, dynamic mapping
> process.
>
> I think such a scale and map would have relevance for accuracy in
> situational decision making. [I'm slowly pulling the thread through to a
> conclusion here.] You may elect to position scaling and mapping schemas as
> clusters of higher-strategic management functionality. Again, hierarchy and
> levels go hand in glove. However, hierarchies may also become relative,
> depending on the schema employed. Up can be down, and down can be up, and
> everything may happen in between. In this sense, hierarchy could become a
> gateway to state, and state the point of departure for every single moment,
> as timespace continuum, as systems event.
>
> To my mind, such a system would be able to perform self-adaptation, aka
> autonomous behavior. If my view should have merit, I would suggest that
> there are a number of essential components still missing from your design
> schema. Such system components may be related to your thinking on levels
> of pattern recognition, and suitable to your notion of hierarchical
> increments from the lowest (meta) level.
>
> The wood for the trees, and the RNA for the fruit, kinda thing.
>
> Best
>
> Rob
> ------------------------------
> *From:* Boris Kazachenko via AGI <agi@agi.topicbox.com>
> *Sent:* Monday, 11 June 2018 2:41 PM
> *To:* agi@agi.topicbox.com
> *Subject:* Re: [agi] Anyone interested in sharing your projects / data
> models
>
>
>
> AGI's bottleneck must be in *learning*; anyone who focuses on something
> else is barking up the wrong tree...
>
>
> Not just a bottleneck: it's the very definition of GI, the fitness /
> objective function of intelligence.
> Specifically, unsupervised / value-free learning, AKA pattern recognition.
> Supervision and reinforcement are simple add-ons.
> Anything else is a distraction. "Problem solving" is meaningless: there is
> no such thing as "problem" in general, except as defined above.
>
>
> Now think about this:  we already have the weapon (deep learning) which is
> capable of learning *arbitrary* function mappings.
>
>
> Yeah, we have some random hammer, so every problem looks like a nail.
>
>
> We are facing a learning problem which we already know the formal
> definition of.
>
>
> No, you don't. There is no constructive theory behind ANNs; they're just a
> hack vaguely inspired by another hack: the human brain.
> Which is a misshapen kludge, and this list makes it perfectly obvious.
> Sorry, can't help it.
>
> Anyway, this is my alternative:  https://github.com/boris-kz/CogAlg
>
>
>
>
> On Mon, Jun 11, 2018 at 3:41 AM YKY via AGI <agi@agi.topicbox.com> wrote:
>
> On Mon, Jun 11, 2018 at 2:55 PM, MP via AGI <agi@agi.topicbox.com> wrote:
>
> Right. Most of them work off a variant of depth-first search, which
> would usually lead to a combinatorial explosion, or some kind of heuristic
> to cut down on search time at some other cognitive expense...
>
> Not to mention most of them run off human-made rules, rather than learning
> for themselves through subjective experience.
>
> I highly doubt even Murray’s bizarre system subjectively learns. There are
> hand-coded concepts in the beginning of his trippy source code.
>
> How can your system overcome this? How can it subjectively learn without
> human intervention?
>
>
>
> AGI's bottleneck must be in *learning*; anyone who focuses on something
> else is barking up the wrong tree...
>
> Now think about this:  we already have the weapon (deep learning) which is
> capable of learning *arbitrary* function mappings.  We are facing a
> learning problem which we already know the formal definition of.  So we
> just need to apply that weapon to the problem.  How hard can that be?
>
> Well, it turns out it's very hard to understand the abstract (algebraic)
> structure of logic; that took me a long time to master, but now I have a
> pretty clear view of it.
>
> Inductive learning in logic is done via some kind of depth-first search in
> the space of logic formulas, as you described.  The neural network can also
> perform a search in the weight space, maximizing some objective functions.
> So the weight space must somehow *correspond* to the space of logic
> formulas.
>
> In my proposal (which has just freshly failed), I encoded the formulas as
> the output of the neural network. That is one example, albeit one where I
> neglected the first-order logic aspects.
>
> Does this answer your question?
>
> And thanks for asking, because that helps me to clarify my thinking as
> well... ☺
>
