Thank you for that introduction to mathematical symbolic interpretation.
The notation was familiar enough that I could follow what you were saying,
and the explanations helped me avoid getting confused. I found it very
interesting and am still thinking about it. In fact, I want to work on a
simpler AI-type project, which would make the insights you must have very
useful to me. Unfortunately, I am not sure I could find the time to be of
much use to you. If you ever decide to write a textbook about this topic,
I would be very interested. If you do write any additional chapters, I
think you should make sure to include a few worked examples. If there is
any way I could help you without working full time on the project, let me
know.
Jim Bromer


On Sun, Feb 24, 2019 at 11:46 PM Linas Vepstas <linasveps...@gmail.com>
wrote:

> Attached is a PDF that reviews a relationship between symbols and reality.
> Depending on your outlook, you will probably find it to be either trivial,
> or wrong, or useless or all of the above. -- linas
>
> On Sun, Feb 24, 2019 at 2:32 PM Nanograte Knowledge Technologies <
> nano...@live.com> wrote:
>
>> You'll probably need both, not one vs the other. I'd think that if the same
>> soft architecture were used for a neural net as for the symbol net, the symbol
>> net would eventually outperform the neural net, but only by virtue of the
>> data integrity of the neural net. The symbol net could be viewed as one
>> abstraction of a neural net. The symbol net could be integrated with
>> floating memory. This seems unlikely for a neural net to achieve on its
>> own.
>>
>> I do not have all the "correct" terms here. I also try to stay away
>> from some terms on purpose, so as not to become trapped by them. To me, some
>> of these ideas are functional concepts - a dichotomy.
>>
>> From a dynamical memory store it should be possible to deploy a
>> functional "RNA"-type coding schema as possibly the most-abstract
>> de-abstraction of the whole system. This is the domain of paradox I
>> referred to before. I'll call this a paradox because time ceases to exist
>> somewhere along this point, before some of it, or all, or nothing,
>> fragments into multiple versions of one entity. This is a point (or
>> universe) where an iterative spark of tacit knowing (read: possible
>> consciousness) probably occurs.
>>
>> I think the previously-mentioned "location" may be an "area" from which
>> tacit knowledge might enter the timespace continuum before spinning off
>> explicit artifacts. This is because in my view, location has relevance for
>> gravitational force (if gravity would still exist as such in science after
>> another 10 years). I tend to agree that the process I described might be
>> pertinent to an aspect of recombination, but one has to think in terms of
>> limitlessness. Recombination - as we seemingly know it biologically - seems
>> to be an insular concept. However, I think that existentially it is a much
>> greater holistic integral than we may currently be able to imagine. At
>> this point, one would be given to much speculation.
>>
>> In my mind's eye, at a conceptual point of spark, I see an environmental
>> state of all and nothing and everything in between, finite infinity and so
>> on. Perhaps then, morphogenesis. I do not call this state "life". At best,
>> I'd refer to it as a superposition within informational communication.
>>
>> I'll need to put my theory to the test though. Hopefully, in my lifetime
>> then. :-)
>>
>> Robert Benjamin
>>
>> ------------------------------
>> *From:* Jim Bromer <jimbro...@gmail.com>
>> *Sent:* Sunday, 24 February 2019 7:25 PM
>> *To:* AGI
>> *Subject:* Re: [agi] Some thoughts about Symbols and Symbol Nets
>>
>> Nano: I thought that you might be thinking of something like that. As I
>> tried to form a response, I finally started to make sense of it. A
>> discrete mathematical algorithm will suffer from combinatorial
>> complications. A deep learning net seems to be able to deal with those
>> kinds of problems - as long as there are good approximate solutions
>> available. So then, is there any way a symbol net could outperform neural
>> nets (if rapidity were not a fundamental problem)? It seems to me that you
>> may be approaching a possible solution to that kind of situation. Does
>> that analysis make sense to you?
>> Jim Bromer
>>
>>
>> On Sat, Feb 23, 2019 at 3:39 PM Nanograte Knowledge Technologies <
>> nano...@live.com> wrote:
>>
>> https://mathinsight.org/definition/network
>> In other words... the purpose should be functional RNA.
>>
>> Now, is an AGI blueprint justified?
>>
>> Gene regulatory network - Wikipedia:
>> https://en.wikipedia.org/wiki/Gene_regulatory_network
>> A gene (or genetic) regulatory network (GRN) is a collection of molecular
>> regulators that interact with each other and with other substances in the
>> cell to govern the gene expression levels of mRNA and proteins. These play
>> a central role in morphogenesis, the creation of body structures, which in
>> turn is central to evolutionary developmental biology (evo-devo).
>>
>>
>> ------------------------------
>> *From:* Jim Bromer <jimbro...@gmail.com>
>> *Sent:* Saturday, 23 February 2019 6:31 PM
>> *To:* AGI
>> *Subject:* Re: [agi] Some thoughts about Symbols and Symbol Nets
>>
>> So now I am beginning to wonder if Nano was talking about some kind of
>> syntactic rules of combination for symbols, and whether all knowledge would
>> have to 'emerge' through learning selecting particular combinations within
>> a network.
>> Jim Bromer
>>
>>
>> On Fri, Feb 22, 2019 at 1:54 AM Nanograte Knowledge Technologies <
>> nano...@live.com> wrote:
>>
>> If this were the case, I'd agree with you. What I'm proposing is content
>> independent and context dependent. It is suitable for CAS applications. It
>> is not "designed" to be constrained, but to identify and normalize a
>> primary, contextual constraint in order to resolve it in an adaptive sense.
>> Meaning, humans do not resolve it, but the contextually-bound instance of
>> the system does. By implication, all possible meanings of the symbol are
>> always resident and latent. However, the decisive meaning for a particular
>> context is alive for the duration of that contextual reference in the
>> greater schema of information. In other words, the correct answer is always
>> possible within a particular context. Such is the basis of critical
>> thinking, to derive the correct answer to every situation. Yes, there is an
>> underlying assumption, which is that a correct answer exists for every
>> context, but this could be proven scientifically.
>>
>> Previously, mention was made in the forum about hierarchy (meaning
>> control). Having hierarchy within a system's constructs provides a system
>> with robustness and integrity, which translates into informational
>> reliability. Now, it seems the question of validity has been settled, but
>> not the one on reliability. What I'm proposing already has embedded into
>> its language what could be termed validity and reliability, at scale.
>>
>> That is where the analogy of the tyre hitting the tar has relevance, or
>> the point in project and program management where the essential truth
>> hits home. It is where the absolute impact on a situation has most effect.
>> We could also argue how it resembles the point of effective complexity,
>> which is the point of reasoning we all desire within an AGI entity.
>>
>> You stated: "The term 'context-free' refers to the syntactic context, not
>> the greater global context (of variable type definitions or redefinitions
>> and so on)."
>>
>> >>I strongly disagree with this view. In a semantic system, which I
>> contend is required for a symbolic system to become operational, syntax
>> lends itself to context specificity. I think that point was borne out via
>> recent discussions on the forum.
>>
>> I think no designer should (be allowed to) arbitrarily decide local and
>> global boundaries. That's a recipe for disaster. Boundaries are outcomes of
>> the inherent (natural) design resident within components and collective
>> contexts. In addition to a specified context boundary, the underlying
>> methodology should allow for scalability, which is not only an issue of
>> size, but also of adaptive scope (implying boundary adaptiveness). In this
>> sense, a contextual/systems boundary would be structured/unstructured in a
>> construct of thesis/antithesis - two parts of the same coin. Perhaps in using
>> this approach, we would achieve Haramein et al.'s perspective on a
>> finite-infinity in a computational model.
>>
>> When looked at via a report, or a snapshot view, such a system would
>> appear to be structured (which it also is). However, if you could view it
>> as a continuous value stream, as a movie, it would be possible to watch
>> (and trace) how it morphed relative to its adaptive algorithm - as an
>> unstructured system. In time, for each specific context, it should become
>> possible to identify the patterns of such morphing, and apply predictive
>> algorithms.
>>
>> I think one outcome (there are multiple outcomes) of such a system would
>> resemble a Symbol Net. It should theoretically be possible to extract such
>> nets from the "live" system. I think this is rather similar to how we do it
>> within society today.
>>
>>  Robert Benjamin
>> ------------------------------
>> *From:* Jim Bromer <jimbro...@gmail.com>
>> *Sent:* Thursday, 21 February 2019 11:46 PM
>> *To:* AGI
>> *Subject:* Re: [agi] Some thoughts about Symbols and Symbol Nets
>>
>> A contextual reference framework, designed to limit the meaning of a
>> symbol to one meaning within a particular context, would only displace the
>> ambiguity - unless the language was artificially designed to be that way.
>> So-called 'context-free' languages, ironically enough, do just that. They
>> have some value in AI, but it is difficult to see how it could be used as
>> an effective basis for stronger AI. The term 'context-free' refers to the
>> syntactic context, not the greater global context (of variable type
>> definitions or redefinitions and so on). Perhaps the term is misunderstood
>> or misused in compiler design, but, a lot like applied logic, its
>> application is useful because it can be limited to 'a framework' (like a
>> local function and so on). So perhaps industry did develop a way to limit
>> ambiguity within a contextual framework, but so far it has not proven to be
>> very useful in stronger AI. The nature of *limiting* ambiguity of a symbol
>> (or possible referential signification) does not seem to be a very powerful
>> tool to rely on when you are trying to stretch the reach of current (or
>> 30-year-old) ideas to attain greater powers of 'understanding'.
>> Jim Bromer
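>>
>> (For concreteness, here is a minimal, purely illustrative sketch of that
>> 'syntactic context' point; the toy grammar, token names and code are
>> invented for this example, not taken from any particular system. Whether a
>> production applies depends only on the current nonterminal and the next
>> token; checking global facts, such as how a variable was declared, has to
>> happen elsewhere.)
>>
>> import re
>>
>> TOKEN = re.compile(r"\s*(\d+|[A-Za-z_]\w*|[+()])")
>>
>> def tokenize(text):
>>     return TOKEN.findall(text)
>>
>> def parse_expr(toks, i):          # expr := term ('+' term)*
>>     i = parse_term(toks, i)
>>     while i < len(toks) and toks[i] == '+':
>>         i = parse_term(toks, i + 1)
>>     return i
>>
>> def parse_term(toks, i):          # term := NUMBER | NAME | '(' expr ')'
>>     if i >= len(toks):
>>         raise SyntaxError("unexpected end of input")
>>     if toks[i] == '(':
>>         i = parse_expr(toks, i + 1)
>>         if i >= len(toks) or toks[i] != ')':
>>             raise SyntaxError("missing ')'")
>>         return i + 1
>>     if re.fullmatch(r"\d+|[A-Za-z_]\w*", toks[i]):
>>         return i + 1
>>     raise SyntaxError(f"unexpected token {toks[i]!r}")
>>
>> toks = tokenize("x + (y + 3)")
>> assert parse_expr(toks, 0) == len(toks)  # parses with no knowledge of x or y
>>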
>>
>>
>> On Thu, Feb 21, 2019 at 2:49 PM Nanograte Knowledge Technologies <
>> nano...@live.com> wrote:
>>
>> If one had a contextual reference framework, each symbol would always
>> have one meaning within a particular context. Searches would always be
>> optimal. An example of this can be seen in the Japanese language. So,
>> the 30+ years of waiting was for no good reason. If only the industry had
>> developed appropriate theory for dealing with scalable ambiguity, which it
>> probably had.
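>>
>> (Purely as an illustration of what such a 'contextual reference framework'
>> could look like at its simplest, here is a toy Python sketch with invented
>> data; once a context is fixed, a symbol resolves to exactly one meaning, so
>> the lookup itself is unambiguous.)
>>
>> MEANINGS = {
>>     ("finance", "bank"): "institution that holds deposits",
>>     ("river",   "bank"): "sloping land beside a watercourse",
>> }
>>
>> def resolve(symbol, context):
>>     # One meaning per (context, symbol) pair: the search is a direct lookup.
>>     try:
>>         return MEANINGS[(context, symbol)]
>>     except KeyError:
>>         raise LookupError(f"'{symbol}' has no meaning in context '{context}'")
>>
>> assert resolve("bank", "river") == "sloping land beside a watercourse"
>>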
>>
>> ------------------------------
>> *From:* Jim Bromer <jimbro...@gmail.com>
>> *Sent:* Thursday, 21 February 2019 8:13 PM
>> *To:* AGI
>> *Subject:* Re: [agi] Some thoughts about Symbols and Symbol Nets
>>
>> I asked myself the question: If a theory of symbols was a feasible basis
>> for stronger AI, then the earlier efforts in discrete AI or weighted
>> reasoning should have shown some promise. They should have worked. So why
>> didn't they work? Then I remembered that they did work with small data
>> sets. GOFAI did work as long as it could make a rapid search through the
>> possible candidates of meaning, but because combinations of symbols have
>> meaning, and because each symbol may have more than one meaning or referent,
>> the problems of combinatorial complications presented a major obstacle to
>> developing the theories much further. My opinion is that the ambiguities or
>> multiple possible relevancies of a symbol (sub-net) can themselves be used
>> to narrow the possible meaning of the symbol (sub-net) when needed in
>> reasoning. We just need a huge amount of memory in order to create an index
>> of generalizations to use the information adequately. We now have that
>> scale of memory and processor speed available to us so we can try things
>> that could not be tried in the 1970s and 80s.
>> Jim Bromer
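>>
>> (A rough, toy sketch of that idea, with invented words and categories: each
>> sense of a symbol is indexed by the generalizations it falls under, and
>> intersecting those with the generalizations of neighbouring symbols narrows
>> the sense, rather than enumerating every combination.)
>>
>> SENSES = {
>>     "bass":   {"fish": {"animal", "water", "food"},
>>                "low_note": {"music", "sound"}},
>>     "guitar": {"instrument": {"music", "sound"}},
>> }
>>
>> def disambiguate(word, neighbours):
>>     # Pool the generalizations of every sense of every neighbouring symbol.
>>     context = set()
>>     for n in neighbours:
>>         for gens in SENSES.get(n, {}).values():
>>             context |= gens
>>     # Keep the sense of `word` that shares the most generalizations.
>>     return max(SENSES[word], key=lambda s: len(SENSES[word][s] & context))
>>
>> assert disambiguate("bass", ["guitar"]) == "low_note"
>>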
>>
>>
>> On Tue, Feb 19, 2019 at 12:45 AM Nanograte Knowledge Technologies <
>> nano...@live.com> wrote:
>>
>> Linas, Mike and Jim
>>
>> I find this to be a most-interesting conversation, primarily because it
>> suggests that the development of AGI may not only be challenged by the
>> development of competent theory, but also by programming capabilities to
>> put the theory into practice.
>>
>> How would such an architecture evolve, then, should the desired outcome be
>> for an AGI entity to achieve self-theory and self-programming? In its
>> most-simplistic form, a symbol is but a suitable abstraction of a greater reality,
>> similarly to how a symbol of a red-heart might be an abstraction of a
>> sentient being. Concept? Context? Meaning? Transaction.
>>
>> Who, or what decides what the symbolic world should look like and its
>> meaningfulness? The global state of social evolution may cause terrible
>> confusion in any learning entity. The learning objectives should be
>> specific, not generalized. Isn't learning incorrectly worse than not
>> learning at all?
>>
>> I think there should be a general AGI architecture, replete with the
>> capacity to develop and function within a generic world view. Furthermore,
>> I think the real value would be derived from specialized AGI. Maybe beyond
>> that, an AGI architecture would - in future - morph via its own social
>> networking and inherent capabilities to become more than the sum of its
>> parts.
>>
>> To do so would take a lot more than intersections. I agree with the
>> statements made about binary/vector theory, but it seems obvious to me that
>> this would not be sufficient for this task. You implied fractals. To my
>> mind, that would be the only way to proceed. As such, I think the primary
>> issue remains a design issue.
>>
>> Robert Benjamin
>>
>> ------------------------------
>> *From:* Linas Vepstas <linasveps...@gmail.com>
>> *Sent:* Monday, 18 February 2019 10:36 PM
>> *To:* AGI
>> *Subject:* Re: [agi] Some thoughts about Symbols and Symbol Nets
>>
>>
>>
>> On Mon, Feb 18, 2019 at 1:17 PM Mike Archbold <jazzbo...@gmail.com>
>> wrote:
>>
>> I'm not sure I completely follow your point, but I sort of get it.
>>
>> I tend to think of symbols as one type of the "AI stuff" a computer
>> uses to think with -- the other main type of "AI stuff" being neural
>> networks. These have analogies to the "mind stuff" we use to think
>> with.
>>
>>
>> Symbol systems and neural-net systems can be seen to be variants of the
>> same thing; two sides of the same coin. I posted an earlier thread on this.
>> There's a 50-page long PDF with math, here:
>> https://github.com/opencog/opencog/raw/master/opencog/nlp/learn/learn-lang-diary/skippy.pdf
>>
>> roughly: both form networks. They differ primarily in how they represent
>> the networks, and how they assign weights to network connections (and how
>> they update weights on network connections).
>>
>>
>> On their own, symbols don't mean anything, of course, and inherently
>> don't contain "understanding" in any definition of understanding.
>>
>> Is there a broad theory of symbols? We kind of proceed with loose
>> definitions. I remember reading the Newell and Simon works, and they
>> saw AI strictly in terms of symbols and LISP (as I recall, anyway).
>>
>>
>> Yes. The "broad theory of symbols" is called "model theory" by
>> mathematicians. It's highly technical and arcane. Its most prominent
>> distinguishing feature is that everything is binary: it is or it ain't.
>> Something is true, or false.  A formula takes values, or there is no such
>> formula. A relation binds two things together, or there is no relation.
>> There's no blurry middle-ground.
>>
>> So, conventionally, networks of symbols, and the relations between them,
>> and the formulas transforming them -- these form a network, a graph, and
>> everything on that network/graph is a zero or a one -- an edge exists
>> between two nodes, or it doesn't.
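>>
>> (A tiny illustration of that crisp picture, with toy relations invented for
>> the example: an edge in the symbol graph either exists or it does not.)
>>
>> EDGES = {("cat", "is_a", "animal"), ("cat", "has", "fur")}
>>
>> def holds(subj, rel, obj):
>>     return (subj, rel, obj) in EDGES      # strictly True or False
>>
>> assert holds("cat", "is_a", "animal") is True
>> assert holds("cat", "is_a", "mineral") is False
>>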
>>
>> The obvious generalization is to make these fractional, to assign
>> weights. Neural nets do this. But neural nets do something else, that they
>> probably should not: they jam everything into vectors (or tensors). This is
>> kind of OK, because the algebra of a graph is a lot like the algebra of a
>> vector space, and the confusion between the two is an excusable mistake: it
>> takes some sophistication to realize that they are only similar, but not
>> the same.
>>
>> I claim: fix both these things, and you've got a winner.  Use symbolic
>> systems, but use fractional values, not 0/1 relations.  Find a good way of
>> updating the weights. So, deep-learning is a very effective weight-update
>> algorithm. But there are other ways of updating weights too (that are
>> probably just as good or better). Next, clarify the
>> vector-space-vs-graph-algebra issue, and then you can clearly articulate
>> how to update weights on symbolic systems, as well.
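>>
>> (Continuing the toy example above: the same symbolic edges, but carrying
>> fractional weights, with one deliberately simple update rule standing in
>> for whatever scheme, gradient descent or otherwise, actually gets used.
>> The numbers and names are invented for illustration only.)
>>
>> from collections import defaultdict
>>
>> weights = defaultdict(float)
>> weights[("cat", "is_a", "animal")] = 0.9
>> weights[("cat", "is_a", "plant")]  = 0.1
>>
>> def observe(edge, evidence, rate=0.2):
>>     # Nudge the edge weight toward the observed evidence (a value in 0..1).
>>     weights[edge] += rate * (evidence - weights[edge])
>>
>> observe(("cat", "is_a", "plant"), 0.0)    # counter-evidence weakens the edge
>> print(round(weights[("cat", "is_a", "plant")], 2))   # 0.08
>>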
>>
>> (Quickly explained: probabilities are not rotationally-symmetric under
>> the rotation group SO(N) whereas most neural-net vectors are: this is the
>> spot where deep-learning "gets it wrong": it incorrectly mixes Gibbs
>> training functions with rotational symmetry.)
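>>
>> (A quick numeric illustration of that symmetry point, toy numbers only:
>> rotating a valid probability vector with an SO(2) rotation matrix generally
>> pushes it off the probability simplex, so the rotational symmetry assumed
>> for generic vectors does not hold for probabilities.)
>>
>> import math
>>
>> theta = math.pi / 3
>> R = [[math.cos(theta), -math.sin(theta)],
>>      [math.sin(theta),  math.cos(theta)]]   # an element of SO(2)
>>
>> p = [0.2, 0.8]                               # non-negative, sums to 1
>> q = [R[0][0]*p[0] + R[0][1]*p[1],
>>      R[1][0]*p[0] + R[1][1]*p[1]]
>>
>> print(q, sum(q))   # a negative entry, and the sum is no longer 1
>>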
>>
>> So Jim is right: discarding symbolic systems in favor of neural nets is a
>> mistake; the path forward is at the intersection of the two: a net of
>> symbols, a net with weights, a net with gradient-descent properties, a net
>> with probabilities and probability update formulas.
>>
>> -- Linas
>>
>>
>> On 2/18/19, Jim Bromer <jimbro...@gmail.com> wrote:
>> > Since I realized that the discrete vs weighted arguments are passé, I
>> > decided that thinking about symbol nets might be a better direction for me.
>> >
>> > 1. A symbol may be an abstracted 'image' of a (relatively) lower level
>> > object or system.
>> >   An image may consist of a feature of the referent, it may be an icon of
>> > the referent, or it may be a compressed form of the referent.
>> > 2. A symbol may be more like a 'label' for some object or system.
>> > 3. A generalization may be represented as an image of what is being
>> > generalized, but it also may be more of a label.
>> > 4. An 'image', as I am using the term, may be derived from a part or
>> > feature of an object or from a part of a system, but it may be used to
>> > refer to the object or system.
>> > 5. An image or label may be used to represent a greater system. A system
>> > may take on different appearances from different vantage points, and
>> > analogously, some features of interest may be relevant in one context but
>> > not in another context. A symbol may be correlated with some other
>> > 'object' and may stand as a referent to it in some contexts.
>> >
>> > So, while some symbols may be applied to or projected onto a 'lower' corpus
>> > of data, others would need to use an image to project onto the data field.
>> > I use the term 'lower' somewhat ambiguously, because I think it is useful
>> > to symbolize a system of symbols, so a 'higher' abstraction of a system
>> > might also be used at the same level. And it seems that a label would have
>> > to be associated with some images if it was to be projected against the
>> > data.
>> >
>> > One other thing. This idea of projecting a symbol image onto some data, in
>> > order to compare the image with some features of the data, seems to have
>> > fallen out of favor with the advancements of DLNNs and other kinds of
>> > neural nets. Projection seems like such a fundamental process that I cannot
>> > see why it should be discarded just because it would be relatively slow
>> > when used with symbol nets. And there are exceptions: GPUs, for example,
>> > love projecting one image onto another.
>> > Jim Bromer
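>> >
>> > (A rough sketch of 'projection' in this sense, with a toy 1-D data field
>> > and invented numbers: slide the symbol's image, a small template, across
>> > the data and score the match at each offset; this is essentially the
>> > cross-correlation that GPUs are so good at.)
>> >
>> > def project(template, field):
>> >     scores = []
>> >     for offset in range(len(field) - len(template) + 1):
>> >         window = field[offset:offset + len(template)]
>> >         scores.append(sum(t * w for t, w in zip(template, window)))
>> >     return scores
>> >
>> > field    = [0, 0, 1, 3, 1, 0, 0, 2, 0]
>> > template = [1, 3, 1]                      # the symbol's 'image'
>> > scores   = project(template, field)
>> > best     = max(range(len(scores)), key=scores.__getitem__)
>> > print(best, scores[best])                 # offset 2 matches best
>> >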
>> 
>> 
>> --
>> cassette tapes - analog TV - film cameras - you
>> 
>
> --
> cassette tapes - analog TV - film cameras - you
>
