I asked myself the question: if a theory of symbols were a feasible basis
for stronger AI, then the earlier efforts in discrete AI or weighted
reasoning should have shown some promise. They should have worked. So why
didn't they? Then I remembered that they did work with small data sets.
GOFAI did work as long as it could make a rapid search through the possible
candidates of meaning, but because combinations of symbols have meaning,
and because each symbol may have more than one meaning or referent, the
resulting combinatorial explosion was a major obstacle to developing the
theories much further. My opinion is that the ambiguities, or multiple
possible relevancies, of a symbol (sub-net) can themselves be used to
narrow its possible meaning when that is needed in reasoning. We just need
a huge amount of memory to build an index of generalizations that lets the
information be used adequately. We now have that scale of memory and
processor speed available to us, so we can try things that could not be
tried in the 1970s and 80s.
Jim Bromer


On Tue, Feb 19, 2019 at 12:45 AM Nanograte Knowledge Technologies <
nano...@live.com> wrote:

> Linas, Mike and Jim
>
> I find this to be a most interesting conversation, primarily because it
> suggests that the development of AGI may be challenged not only by the
> development of competent theory, but also by the programming capability
> needed to put that theory into practice.
>
> How would such an architecture evolve, then, should the desired outcome be
> for an AGI entity to achieve self-theory and self-programming? In its most
> simplistic form, a symbol is but a suitable abstraction of a greater
> reality, similar to how the symbol of a red heart might be an abstraction
> of a sentient being. Concept? Context? Meaning? Transaction.
>
> Who, or what decides what the symbolic world should look like and its
> meaningfulness? The global state of social evolution may cause terrible
> confusion in any learning entity. The learning objectives should be
> specific, not generalized. Isn't learning incorrectly worse than not
> learning at all?
>
> I think there should be a general AGI architecture, replete with the
> capacity to develop and function within a generic world view. Furthermore,
> I think the real value would be derived from specialized AGI. Maybe, beyond
> that, an AGI architecture would in future morph, via its own social
> networking and inherent capabilities, to become more than the sum of its
> parts.
>
> To do so would take a lot more than intersections. I agree with the
> statements made about binary/vector theory, but it seems obvious to me
> that it would not be sufficient for this task. You implied fractals; to my
> mind, that would be the only way to proceed. As such, I think the primary
> issue remains a design issue.
>
> Robert Benjamin
>
> ------------------------------
> *From:* Linas Vepstas <linasveps...@gmail.com>
> *Sent:* Monday, 18 February 2019 10:36 PM
> *To:* AGI
> *Subject:* Re: [agi] Some thoughts about Symbols and Symbol Nets
>
>
>
> On Mon, Feb 18, 2019 at 1:17 PM Mike Archbold <jazzbo...@gmail.com> wrote:
>
> I'm not sure I completely follow your point, but I sort of get it.
>
> I tend to think of symbols as one type of the "AI stuff" a computer
> uses to think with -- the other main type of "AI stuff" being neural
> networks. These have analogies to the "mind stuff" we use to think
> with.
>
>
> Symbol systems and neural-net systems can be seen as variants of the same
> thing -- two sides of the same coin. I posted an earlier thread on this.
> There's a 50-page PDF with the math here:
> https://github.com/opencog/opencog/raw/master/opencog/nlp/learn/learn-lang-diary/skippy.pdf
>
> Roughly: both form networks. They differ primarily in how they represent
> the networks, and in how they assign weights to network connections (and
> how they update those weights).
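>
> A toy picture of that difference (the nodes and weights here are made up;
> this is not the notation from the PDF above):
>
>     import numpy as np
>
>     # The same three-node network, written two ways.
>     # Symbolic style: a typed edge either exists or it doesn't.
>     symbolic = {("cat", "is-a", "animal"), ("dog", "is-a", "animal")}
>
>     # Neural-net style: every possible connection carries a real weight,
>     # usually packed into a matrix over an ordered list of nodes.
>     nodes = ["cat", "dog", "animal"]
>     W = np.zeros((3, 3))
>     W[nodes.index("cat"), nodes.index("animal")] = 0.9
>     W[nodes.index("dog"), nodes.index("animal")] = 0.8
>
>     # Same graph either way; what differs is the representation and the
>     # rule used to set (and later update) those numbers.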
>
>
> On their own, symbols don't mean anything, of course, and inherently
> don't contain "understanding" in any definition of understanding.
>
> Is there a broad theory of symbols? We kind of proceed with loose
> definitions. I remember reading the Newell and Simon works, and they frame
> AI strictly in terms of symbols and LISP (as I recall, anyway).
>
>
> Yes. The "broad theory of symbols" is called "model theory" by
> mathematicians. It's highly technical and arcane. Its most prominent
> distinguishing feature is that everything is binary: it is or it ain't.
> Something is true, or false.  A formula takes values, or there is no such
> formula. A relation binds two things together, or there is no relation.
> There's no blurry middle ground.
>
> So, conventionally, networks of symbols, and the relations between them,
> and the formulas transforming them -- these form a network, a graph, and
> everything on that network/graph is a zero or a one -- an edge exists
> between two nodes, or it doesn't.
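>
> To illustrate that all-or-nothing character with a made-up relation:
>
>     # A model-theoretic relation is just a set of tuples: a pair is
>     # either in the relation or it is not -- nothing in between.
>     IsA = {("cat", "animal"), ("dog", "animal")}
>     print(("cat", "animal") in IsA)    # True
>     print(("cat", "vehicle") in IsA)   # False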
>
> The obvious generalization is to make these fractional, to assign weights.
> Neural nets do this. But neural nets also do something else that they
> probably should not: they jam everything into vectors (or tensors). This
> is kind of OK, because the algebra of a graph is a lot like the algebra of
> a vector space, and confusing the two is an excusable mistake: it takes
> some sophistication to realize that they are only similar, not the same.
>
> I claim: fix both of these things, and you've got a winner.  Use symbolic
> systems, but use fractional values, not 0/1 relations.  Find a good way of
> updating the weights. Deep learning is a very effective weight-update
> algorithm, but there are other ways of updating weights too (that are
> probably just as good, or better).  Next, clarify the
> vector-space-vs-graph-algebra issue, and then you can clearly articulate
> how to update weights on symbolic systems as well.
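>
> One sketch of such an "other way" (toy counts, invented purely for
> illustration): keep the symbolic relations, but let the value on each one
> be an observed relative frequency, updated by counting rather than by
> gradient descent.
>
>     from collections import Counter
>
>     counts = Counter()   # (word, relation, other) -> occurrences
>     totals = Counter()   # (word, relation) -> occurrences
>
>     def observe(word, relation, other):
>         counts[(word, relation, other)] += 1
>         totals[(word, relation)] += 1
>
>     def weight(word, relation, other):
>         # P(other | word, relation): a fraction in (0, 1], not a 0/1 bit.
>         return counts[(word, relation, other)] / totals[(word, relation)]
>
>     observe("bank", "modified-by", "river")
>     observe("bank", "modified-by", "river")
>     observe("bank", "modified-by", "central")
>     print(weight("bank", "modified-by", "river"))   # 0.666...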
>
> (Quickly explained: probabilities are not rotationally symmetric under the
> rotation group SO(N), whereas most neural-net vectors are; this is the spot
> where deep learning "gets it wrong": it incorrectly mixes Gibbs training
> functions with rotational symmetry.)
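>
> (A two-dimensional toy of that asymmetry, with made-up numbers: a
> probability distribution has non-negative entries summing to one, and an
> SO(2) rotation does not preserve that.)
>
>     import numpy as np
>
>     p = np.array([0.9, 0.1])          # a probability distribution
>     theta = np.pi / 4                 # a 45-degree rotation in SO(2)
>     R = np.array([[np.cos(theta), -np.sin(theta)],
>                   [np.sin(theta),  np.cos(theta)]])
>     q = R @ p
>     print(q, q.sum())   # ~[0.566, 0.707], sum ~1.273 -- no longer a
>                         # probability distribution; rotation preserves
>                         # Euclidean length, not the simplex.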
>
> So Jim is right: discarding symbolic systems in favor of neural nets is a
> mistake; the path forward is at the intersection of the two: a net of
> symbols, a net with weights, a net with gradient-descent properties, a net
> with probabilities and probability update formulas.
>
> -- Linas
>
>
> On 2/18/19, Jim Bromer <jimbro...@gmail.com> wrote:
> > Since I realized that the discrete vs. weighted arguments are passé, I
> > decided that thinking about symbol nets might be a better direction for
> > me.
> >
> > 1. A symbol may be an abstracted 'image' of a (relatively) lower level
> > object or system.
> >   An image may consist of a feature of the referent; it may be an icon
> > of the referent, or it may be a compressed form of it.
> > 2. A symbol may be more like a 'label' for some object or system.
> > 3. A generalization may be represented as an image of what is being
> > generalized but it also may be more of a label.
> > 4. An 'image', as I am using the term, may be derived from a part or
> > feature of an object or from a part of a system, but it may be used to
> > refer to the object or system.
> > 5. An image or label may be used to represent a greater system. A system
> > may take on different appearances from different vantage points, and
> > analogously, some features of interest may be relevant in one context but
> > not from another context. A symbol may be correlated with some other
> > 'object' and may stand as a referent to it in some contexts.
> >
> > So, while some symbols may be applied to, or projected onto, a 'lower'
> > corpus of data, others would need to use an image to project onto the
> > data field. I use the term 'lower' somewhat loosely, because I think it
> > is useful to symbolize a system of symbols, so that a 'higher'
> > abstraction of a system might also be used at the same level. And it
> > seems that a label would have to be associated with some images if it
> > were to be projected against the data.
> >
> > One other thing: this idea of projecting a symbol image onto some data,
> > in order to compare the image with some features of the data, seems to
> > have fallen out of favor with the advances in DLNNs and other kinds of
> > neural nets. Projection seems like such a fundamental process that I
> > cannot see why it should be discarded just because it would be
> > relatively slow when used with symbol nets. And there are exceptions:
> > GPUs, for example, love projecting one image onto another.
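> >
> > The crudest image version of what I mean by projection -- a toy
> > normalized correlation over random data, with a known patch planted so
> > there is something to find:
> >
> >     import numpy as np
> >
> >     rng = np.random.default_rng(0)
> >     data = rng.random((8, 8))            # the 'lower' data field
> >     patch = data[2:5, 3:6].copy()        # plant a known 3x3 image
> >
> >     # Slide the image over the data and score the normalized match
> >     # at every offset -- i.e. project the image onto the data.
> >     best, where = -1.0, None
> >     for i in range(6):
> >         for j in range(6):
> >             window = data[i:i+3, j:j+3]
> >             score = np.sum(window * patch) / (
> >                 np.linalg.norm(window) * np.linalg.norm(patch))
> >             if score > best:
> >                 best, where = score, (i, j)
> >     print(where, round(best, 3))         # (2, 3) 1.0 -- the planted patch
> >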
> > Jim Bromer
> 
> 
> --
> cassette tapes - analog TV - film cameras - you
