I mentioned that symbols can act as types (and they can be used as type
symbols). Symbols possess features, or they acquire features through use.
These features may not be specifically defined, but they can be. The roles
symbols play, however, are profuse. These roles add to the ambiguity,
because a single symbol may take on many roles, and they will certainly
become infused with the roles that a symbol subnet plays, but my opinion is
that these same roles can be used to resolve the ambiguities of the symbol
net as well. Most categories of symbols are framed around a family of
characteristics, where each member shares some of the familial
characteristics. This makes pinning down the meaning of a symbol or a
symbol subnet complicated, but because the roles that symbols play exist at
different abstract 'levels', the meaning of the symbol net should be
resolvable, as long as the particular use has been previously studied (a
toy sketch follows). I am talking about concepts, but 'symbolic reference
net' might be a slightly more objective description.
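
Here is a minimal sketch of the kind of resolution I mean, assuming a toy
representation in which each symbol carries a set of roles keyed by
abstraction level and the context is just the set of roles already settled
nearby. All of the names here are my own illustration, not an established
design:

from dataclasses import dataclass, field

@dataclass
class Symbol:
    name: str
    # Roles the symbol can play, keyed by abstraction level (0 = most concrete).
    roles: dict[int, set[str]] = field(default_factory=dict)

def resolve(symbol: Symbol, context_roles: set[str]) -> list[tuple[int, str]]:
    # Keep only the (level, role) pairs consistent with the surrounding
    # context; the ambiguity shrinks as the context grows.
    matches = []
    for level, roles in sorted(symbol.roles.items()):
        for role in roles & context_roles:
            matches.append((level, role))
    return matches

# 'bank' is the classic ambiguous symbol: different roles at different levels.
bank = Symbol('bank', {0: {'riverside', 'building'}, 1: {'financial-institution'}})
print(resolve(bank, {'financial-institution', 'loan'}))  # [(1, 'financial-institution')]
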
Jim Bromer


On Mon, Feb 18, 2019 at 10:22 AM Jim Bromer <jimbro...@gmail.com> wrote:

> Since I realized that the discrete vs. weighted arguments are passé, I
> decided that thinking about symbol nets might be a better direction for me
> (a rough sketch of points 1-5 as data structures follows the list):
>
> 1. A symbol may be an abstracted 'image' of a (relatively) lower-level
> object or system.
>   An image may consist of a feature of the referent, it may be an icon of
> the referent, or it may be a compressed form of it.
> 2. A symbol may be more like a 'label' for some object or system.
> 3. A generalization may be represented as an image of what is being
> generalized, but it may also be more of a label.
> 4. An 'image', as I am using the term, may be derived from a part or
> feature of an object or from a part of a system, but it may be used to
> refer to the object or system as a whole.
> 5. An image or label may be used to represent a greater system. A system
> may take on different appearances from different vantage points, and,
> analogously, some features of interest may be relevant in one context but
> not in another. A symbol may be correlated with some other 'object' and
> may stand as a reference to it in some contexts.
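>
> As a rough sketch, 1-5 might look like this as data structures, assuming
> a toy design in which an 'image' is reduced to a bare feature vector
> drawn from the referent and a 'label' is little more than a name (the
> class names are only my illustration):
>
> from dataclasses import dataclass
>
> @dataclass
> class Image:
>     # An abstracted 'image' of a referent (items 1 and 4): a feature,
>     # an icon, or a compressed form of the lower-level object or system.
>     kind: str                      # 'feature', 'icon', or 'compressed'
>     features: tuple[float, ...]
>
> @dataclass
> class Label:
>     # A 'label' is just a name (item 2), but it can carry associated
>     # images so that it can stand in for them (items 3 and 5).
>     name: str
>     images: list[Image]
>
> door = Image('icon', (0.2, 0.8))
> house = Label('house', [door])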
>
> So, while some symbols may be applied to or projected onto a 'lower'
> corpus of data, others would need to use an image to project onto the data
> field. I use the term 'lower' somewhat ambiguously, because I think it is
> useful to symbolize a system of symbols, so a 'higher' abstraction of a
> system might also be used at the same level. And it seems that a label
> would have to be associated with some images if it were to be projected
> against the data, as sketched below.
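>
> A tiny, self-contained illustration of that last point; the function
> names are mine, and the 'projection' here is just a dot product:
>
> def project_image(features, patch):
>     # Crudest possible comparison: dot product between an image's
>     # feature vector and a same-sized patch of the data field.
>     return sum(a * b for a, b in zip(features, patch))
>
> def project_label(images, patch):
>     # A bare label has no appearance of its own, so projection has to
>     # go through its associated images; score by the best match.
>     if not images:
>         raise ValueError('a label needs at least one image to project')
>     return max(project_image(f, patch) for f in images)
>
> print(project_label([(1.0, 0.0), (0.5, 0.5)], (0.9, 0.1)))  # 0.9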
>
> One other thing. This idea of projecting a symbol image onto some data, in
> order to compare the image with some features of the data, seems to have
> fallen out of favor with the advances in deep-learning neural nets and
> other kinds of neural nets. Projection seems like such a fundamental
> process that I cannot see why it should be discarded just because it would
> be relatively slow when used with symbol nets. And there are exceptions:
> GPUs, for example, love projecting one image onto another (a small sketch
> follows).
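>
> To be concrete about what I mean by projection: slide a small symbol
> image over a 2-D data field and score the overlap at each offset; plain
> cross-correlation, in other words. A minimal numpy sketch, with shapes
> and values chosen only for illustration:
>
> import numpy as np
>
> def project(template, field):
>     # Slide the template over the field and return the offset where
>     # the raw cross-correlation score is highest.
>     th, tw = template.shape
>     fh, fw = field.shape
>     best, best_pos = float('-inf'), (0, 0)
>     for i in range(fh - th + 1):
>         for j in range(fw - tw + 1):
>             score = float(np.sum(template * field[i:i + th, j:j + tw]))
>             if score > best:
>                 best, best_pos = score, (i, j)
>     return best_pos
>
> field = np.zeros((8, 8))
> field[3:5, 2:4] = 1.0            # plant a 2x2 blob in the data field
> template = np.ones((2, 2))       # the symbol's 'image'
> print(project(template, field))  # (3, 2)
>
> That inner double loop is exactly the sort of thing a GPU parallelizes,
> which is my point about GPUs above.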
> Jim Bromer
>
