"Such is the basis of critical thinking, to derive the correct answer to
every situation."

No, that is not the basis of critical thinking. Critical thinking refers
to, in the most literal sense, a critical attitude, as if someone were
making a series of criticisms - much like some of us often do in these
groups. The Foundation for Critical Thinking, however, provides the quote
"Critical thinking is a desire to seek, patience to doubt, fondness to
meditate, slowness to assert, readiness to consider, carefulness to
dispose and set in order; and hatred for every kind of imposture" ~
Francis Bacon (1605), which is not what some of us typically do in these
groups. The suggestion that the basis of critical thinking is to derive
the *correct* answer to *every* situation is a highly exaggerated
statement. Or, at least it would be for me. I am lucky to derive a
*ballpark* answer to *almost any* situation. From my perspective, we are
not playing the same game. I should not write this kind of crap. I should
try to understand the ideas that you were trying to describe and leave
the personal attack out. But I really do not understand what you are
talking about.
Jim Bromer


On Fri, Feb 22, 2019 at 1:54 AM Nanograte Knowledge Technologies <
nano...@live.com> wrote:

> If this were the case, I'd agree with you. What I'm proposing is content
> independent and context dependent. It is suitable for CAS applications. It
> is not "designed"to be constrained, but to identify and normalize a
> primary, contextual constraint in order to resolve it in an adaptive sense.
> Meaning, humans do not resolve it, but the contextually-bound instance of
> the system does. By implication, all possible meanings of the symbol are
> always resident and latent. However, the decisive meaning for a particular
> context is alive for the duration of that contextual reference in the
> greater schema of information. In other words, the correct answer is always
> possible within a particular context. Such is the basis of critical
> thinking, to derive the correct answer to every situation. Yes, there is an
> underlying assumption, which is that a correct answer exists for every
> context, but this could be proven scientifically.
>
> Previously, mention was made in the forum about hierarchy (meaning
> control). Having hierarchy within a system's constructs provides a system
> with robustness and integrity, which translates into informational
> reliability. Now, it seems the question of validity has been settled, but
> not the one on reliability. What I'm proposing already has embedded into
> its language what could be termed validity and reliability, at scale.
>
> That is where the analogy of the tyre hitting the tar has relevance, or
> the point in project and program management where the essential truth
> hits home. It is where the absolute impact on a situation has most effect.
> We could also argue how it resembles the point of effective complexity,
> which is the point-of-reasoning we all desire within an AGI entity.
>
> You stated: "The term 'context-free' refers to the syntactic context, not
> the greater global context (of variable type definitions or redefinitions
> and so on)."
>
> >>I strongly disagree with this view. In a semantic system, which I
> contend is required for a symbolic system to become operational, syntax
> lends itself to context specificity. I think that point was borne out via
> recent discussions on the forum.
>
> I think no designer should (be allowed to) arbitrarily decide local and
> global boundaries. That's a recipe for disaster. Boundaries are outcomes of
> the inherent (natural) design resident within components and collective
> contexts. In addition to a specified context boundary, the underlying
> methodology should allow for scalability, which is not only an issue of
> size, but also of adaptive scope (implying boundary adaptiveness). In this
> sense, a contextual/systems boundary would be structured/unstructured in a
> construct of thesis/antithesis - 2 parts of the same coin. Perhaps in using
> this approach, we would achieve Haramein et al.'s perspective on a
> finite-infinity in a computational model.
>
> When looked at via a report, or a snapshot view, such a system would
> appear to be structured (which it also is). However, if you could view it
> as a continuous value stream, as a movie, it would be possible to watch
> (and trace) how it morphed relative to its adaptive algorithm - as an
> unstructured system. In time, for each specific context, it should become
> possible to identify the patterns of such morphing, and apply predictive
> algorithms.
>
> I think one outcome (there are multiple outcomes) of such a system would
> resemble a Symbol Net. It should theoretically be possible to extract such
> nets from the "live" system. I think this is rather similar to how we do it
> within society today.
>
>  Robert Benjamin
> ------------------------------
> *From:* Jim Bromer <jimbro...@gmail.com>
> *Sent:* Thursday, 21 February 2019 11:46 PM
> *To:* AGI
> *Subject:* Re: [agi] Some thoughts about Symbols and Symbol Nets
>
> A contextual reference framework, designed to limit the meaning of a
> symbol to one meaning within a particular context, would only displace the
> ambiguity - unless the language was artificially designed to be that way.
> So-called 'context-free' languages, ironically enough, do just that. They
> have some value in AI, but it is difficult to see how they could be used as
> an effective basis for stronger AI. The term 'context-free' refers to the
> syntactic context, not the greater global context (of variable type
> definitions or redefinitions and so on). Perhaps the term is misunderstood
> or misused in compiler design, but, a lot like applied logic, its
> application is useful because it can be limited to 'a framework' (like a
> local function and so on). So perhaps industry did develop a way to limit
> ambiguity within a contextual framework, but so far it has not proven to be
> very useful in stronger AI. The nature of *limiting* ambiguity of a symbol
> (or possible referential signification) does not seem to be a very powerful
> tool to rely on when you are trying to stretch the reach of current (or 30
> year old) ideas to attain greater powers of 'understanding'.
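>
> As a rough illustration of that distinction (a toy sketch with made-up
> names, not any particular compiler's behaviour): a context-free parse fixes
> the syntactic role of each symbol, but what the symbols refer to still has
> to come from a separate, global table that the grammar itself never sees.
>
> # Toy example: the parse tree is the same no matter what 'x' and 'y' mean.
> def parse(expr):
>     lhs, _op, rhs = expr.split()
>     return ("add", lhs, rhs)            # syntactic context only
>
> # The 'meaning' of the leaves lives in a global symbol table instead.
> symbol_table = {"x": ("int", 3), "y": ("str", "3")}
>
> tree = parse("x + y")
> kinds = [symbol_table[leaf][0] for leaf in tree[1:]]
> print(tree, kinds)   # same tree either way; validity depends on the table
>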
> Jim Bromer
>
>
> On Thu, Feb 21, 2019 at 2:49 PM Nanograte Knowledge Technologies <
> nano...@live.com> wrote:
>
> If one had a contextual reference framework, each symbol would always have
> one meaning within a particular context. Searches would always be optimal.
> An example of this is evidenced within the Japanese language. So, the 30+
> years of waiting was for no good reason. If only the industry had developed
> appropriate theory for dealing with scalable ambiguity, which it probably
> had.
>
> ------------------------------
> *From:* Jim Bromer <jimbro...@gmail.com>
> *Sent:* Thursday, 21 February 2019 8:13 PM
> *To:* AGI
> *Subject:* Re: [agi] Some thoughts about Symbols and Symbol Nets
>
> I asked myself the question: If a theory of symbols was a feasible basis
> for stronger AI, then the earlier efforts in discrete AI or weighted
> reasoning should have shown some promise. They should have worked. So why
> didn't they work? Then I remembered that they did work with small data
> sets. GOFAI did work as long as it could make a rapid search through the
> possible candidates of meaning, but because combinations of symbols have
> meaning, and because each symbol may have more than one meaning or referent,
> the problems of combinatorial complications presented a major obstacle to
> developing the theories much further. My opinion is that the ambiguities or
> multiple possible relevancies of a symbol (sub-net) can themselves be used
> to narrow the possible meaning of the symbol (sub-net) when needed in
> reasoning. We just need a huge amount of memory in order to create an index
> of generalizations to use the information adequately. We now have that
> scale of memory and processor speed available to us so we can try things
> that could not be tried in the 1970s and 80s.
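>
> A minimal sketch of that narrowing idea (the words and tags below are
> purely illustrative, not an existing system): give each symbol several
> candidate meanings, index each meaning by coarse generalizations, and let
> the generalizations shared with the surrounding context prune the list.
>
> lexicon = {
>     "bank": [("river-edge", {"water", "terrain"}),
>              ("financial-institution", {"money", "institution"})],
>     "deposit": [("put-money-in", {"money"}),
>                 ("sediment-layer", {"terrain"})],
> }
>
> def narrow(symbol, context_tags):
>     # keep only the meanings whose generalizations overlap the context
>     return [m for m, tags in lexicon[symbol] if tags & context_tags]
>
> print(narrow("bank", {"money"}))    # ['financial-institution']
> print(narrow("bank", {"terrain"}))  # ['river-edge']
>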
> Jim Bromer
>
>
> On Tue, Feb 19, 2019 at 12:45 AM Nanograte Knowledge Technologies <
> nano...@live.com> wrote:
>
> Linas, Mike and Jim
>
> I find this to be a most-interesting conversation. Primarily, because it
> suggests that the development of AGI may not only be challenged by the
> development of competent theory, but also by programming capabilities to
> put the theory into practice.
>
> How would such an architecture evolve, then, should the desired outcome be
> for an AGI entity to achieve self-theory and self-programming? In its most
> simplistic form, a symbol is but a suitable abstraction of a greater reality,
> similar to how a symbol of a red heart might be an abstraction of a
> sentient being. Concept? Context? Meaning? Transaction.
>
> Who, or what, decides what the symbolic world should look like and its
> meaningfulness? The global state of social evolution may cause terrible
> confusion in any learning entity. The learning objectives should be
> specific, not generalized. Isn't learning incorrectly worse than not
> learning at all?
>
> I think there should be a general AGI architecture, replete with the
> capacity to develop and function within a generic world view. Furthermore,
> I think the real value would be derived from specialized AGI. Maybe beyond
> that, an AGI architecture would - in future - morph via its own social
> networking and inherent capabilities to become more than the sum of its
> parts.
>
> To do so would take a lot more than intersections. I agree with the
> statements made about binary/vector theory, but it seems obvious to me that
> this would not be sufficient for this task. You implied fractals. To my
> mind, that would be the only way to proceed. As such, I think the primary
> issue remains a design issue.
>
> Robert Benjamin
>
> ------------------------------
> *From:* Linas Vepstas <linasveps...@gmail.com>
> *Sent:* Monday, 18 February 2019 10:36 PM
> *To:* AGI
> *Subject:* Re: [agi] Some thoughts about Symbols and Symbol Nets
>
>
>
> On Mon, Feb 18, 2019 at 1:17 PM Mike Archbold <jazzbo...@gmail.com> wrote:
>
> I'm not sure I completely follow your point, but I sort of get it.
>
> I tend to think of symbols as one type of the "AI stuff" a computer
> uses to think with -- the other main type of "AI stuff" being neural
> networks. These have analogies to the "mind stuff" we use to think
> with.
>
>
> Symbol systems and neural-net systems can be seen to be variants of the
> same thing; two sides of the same coin. I posted an earlier thread on this.
> There's a 50-page long PDF with math, here:
> https://github.com/opencog/opencog/raw/master/opencog/nlp/learn/learn-lang-diary/skippy.pdf
>
> Roughly: both form networks. They differ primarily in how they represent
> the networks, and how they assign weights to network connections (and how
> they update weights on network connections).
>
>
> On their own, symbols don't mean anything, of course, and inherently
> don't contain "understanding" in any definition of understanding.
>
> Is there a broad theory of symbols? We kind of proceed with loose
> definitions. I remember reading the Newell and Simon works, and they
> say AI strictly in terms of symbols and LISP (as I recall anyway).
>
>
> Yes. The "broad theory of symbols" is called "model theory" by
> mathematicians. It's highly technical and arcane. Its most prominent
> distinguishing feature is that everything is binary: it is or it ain't.
> Something is true, or false.  A formula takes values, or there is no such
> formula. A relation binds two things together, or there is no relation.
> There's no blurry middle-ground.
>
> So, conventionally, networks of symbols, and the relations between them,
> and the formulas transforming them -- these form a network, a graph, and
> everything on that network/graph is a zero or a one -- an edge exists
> between two nodes, or it doesn't.
>
> The obvious generalization is to make these fractional, to assign weights.
> Neural nets do this. But neural nets do something else, that they probably
> should not: they jam everything into vectors (or tensors). This is kind of
> OK, because the algebra of a graph is a lot like the algebra of a vector
> space, and the confusion between the two is an excusable mistake: it takes
> some sophistication to realize that they are only similar, but not the same.
>
> I claim: fix both these things, and you've got a winner.  Use symbolic
> systems, but use fractional values, not 0/1 relations.  Find a good way of
> updating the weights. So, deep-learning is a very effective weight-update
> algorithm. But there are other ways of updating weights too (that are
> probably just as good or better). Next, clarify the
> vector-space-vs-graph-algebra issue, and then you can clearly articulate
> how to update weights on symbolic systems, as well.
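>
> As a minimal sketch of what "a net of symbols with fractional weights"
> could look like (illustrative only; this is not OpenCog's API, and the
> update rule here is plain evidence counting rather than gradient descent):
>
> from collections import defaultdict
>
> class SymbolNet:
>     def __init__(self):
>         self.pair = defaultdict(float)   # weight on the edge (a, b)
>         self.total = defaultdict(float)  # total weight leaving a
>
>     def observe(self, a, b, w=1.0):
>         # weight update: accumulate evidence for the relation a -> b
>         self.pair[(a, b)] += w
>         self.total[a] += w
>
>     def strength(self, a, b):
>         # a fractional value in [0, 1], rather than a 0/1 relation
>         return self.pair[(a, b)] / self.total[a] if self.total[a] else 0.0
>
> net = SymbolNet()
> net.observe("bird", "flies", 9.0)
> net.observe("bird", "swims", 1.0)
> print(net.strength("bird", "flies"))   # 0.9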
>
> (Quickly explained: probabilities are not rotationally-symmetric under the
> rotation group SO(N) whereas most neural-net vectors are: this is the spot
> where deep-learning "gets it wrong": it incorrectly mixes Gibbs training
> functions with rotational symmetry.)
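>
> Spelled out a little (a gloss on the same point, not a derivation from the
> PDF above): probability vectors live on the simplex, which a generic
> rotation does not preserve, whereas unconstrained unit vectors live on the
> sphere, which every rotation does preserve:
>
> \Delta^{N-1} = \{\, p \in \mathbb{R}^N : p_i \ge 0,\ \textstyle\sum_i p_i = 1 \,\}, \qquad
> S^{N-1} = \{\, v \in \mathbb{R}^N : \|v\| = 1 \,\},
> \qquad R \in SO(N) \;\Rightarrow\; R\,S^{N-1} = S^{N-1},
> \ \text{but in general}\ R\,\Delta^{N-1} \neq \Delta^{N-1}.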
>
> So Jim is right: discarding symbolic systems in favor of neural nets is a
> mistake; the path forward is at the intersection of the two: a net of
> symbols, a net with weights, a net with gradient-descent properties, a net
> with probabilities and probability update formulas.
>
> -- Linas
>
>
> On 2/18/19, Jim Bromer <jimbro...@gmail.com> wrote:
> > Since I realized that the discrete vs weighted arguments are passé, I
> > decided that thinking about symbol nets might be a better direction for
> > me.
> >
> > 1. A symbol may be an abstracted 'image' of a (relatively) lower level
> > object or system.
> >   An image may consist of a feature of the referent; it may be an icon of
> > the referent, or it may be a compressed form of the referent.
> > 2. A symbol may be more like a 'label' for some object or system.
> > 3. A generalization may be represented as an image of what is being
> > generalized but it also may be more of a label.
> > 4. An 'image', as I am using the term, may be derived from a part or
> > feature of an object or from a part of a system, but it may be used to
> > refer to the object or system.
> > 5. An image or label may be used to represent a greater system. A system
> > may take on different appearances from different vantage points, and
> > analogously, some features of interest may be relevant in one context but
> > not from another context. A symbol may be correlated with some other
> > 'object' and may stand as a referent to it in some contexts.
> >
> > So, while some symbols may be applied to or projected onto a 'lower' corpus
> > of data, others would need to use an image to project onto the data field.
> > I use the term 'lower' somewhat ambiguously, because I think it is useful
> > to symbolize a system of symbols so a 'higher' abstraction of a system
> > might also be used at the same level. And it seems that a label would have
> > to be associated with some images if it was to be projected against the
> > data.
> >
> > One other thing. This idea of projecting a symbol image onto some data, in
> > order to compare the image with some features of the data, seems like it
> > has fallen out of favor with the advances in DLNNs and other kinds of
> > neural nets. Projection seems like such a fundamental process that I cannot
> > see why it should be discarded just because it would be relatively slow
> > when used with symbol nets. And there are exceptions; GPUs, for example,
> > love projecting one image onto another.
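> >
> > A tiny sketch of that kind of projection (plain cross-correlation /
> > template matching over a one-dimensional data field; the names are
> > illustrative):
> >
> > def project(image, field):
> >     # slide the symbol's 'image' across the data field and score the match
> >     best_pos, best_score = None, float("-inf")
> >     for i in range(len(field) - len(image) + 1):
> >         score = sum(a * b for a, b in zip(image, field[i:i + len(image)]))
> >         if score > best_score:
> >             best_pos, best_score = i, score
> >     return best_pos, best_score
> >
> > print(project([1, 2, 1], [0, 0, 1, 2, 1, 0]))  # (2, 6): best match at offset 2
> >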
> > Jim Bromer
> 
> 
> --
> cassette tapes - analog TV - film cameras - you
> 
