> Do I expect the middle layers of AGI to look like something an insurance
company or zoning commission might write? Sure, why not?  I expect a
Chinese Room hard at work, in the middle layers of the AGI.

Isn't it almost certain that in the middle of our brains, a Chinese Room is
hard at work? Who expects middle-tier executive neurons to be "sentient"?

(Middle-level neurons are the ones that harass lower-level neurons and are
performance-evaluated by higher-level neurons on a weekly basis.)


On Wed, 20 Feb 2019 at 01:19, Nanograte Knowledge Technologies <
[email protected]> wrote:

> Linas
>
> Some clarity and thoughts...
>
> ------------------------------
> *From:* Linas Vepstas <[email protected]>
> *Sent:* Tuesday, 19 February 2019 9:06 AM
> *To:* AGI
> *Subject:* Re: [agi] Some thoughts about Symbols and Symbol Nets
>
> Hi Robert,
>
> I'm waiting for unit tests to pass, and it's like watching paint dry.  So
> I write spurious emails as I wait. Spurious response follows.
>
> On Mon, Feb 18, 2019 at 11:45 PM Nanograte Knowledge Technologies <
> [email protected]> wrote:
>
> Linas, Mike and Jim
>
> I find this to be a most-interesting conversation. Primarily, because it
> suggests that the development of AGI may not only be challenged by the
> development of competent theory, but also by programming capabilities to
> put the theory into practice.
>
>
> Yes. The people who know some of the theory usually don't know how to
> program, and vice versa, and getting the two to meet up is hard. That, plus
> the fact that there are 1001 theories, and there is very little (almost no)
> consensus about the right approach.
>
>
>
> Evolving such an architecture then, should desired outcomes be for an AGI
> entity to achieve self-theory and self-programming?
>
>
> Uh, yes? This question seems to be phrased awkwardly.  AGI isn't some
> thing that is just like a self-driving car, but only a tiny bit smarter....
>
> As a human, I have a self-theory. Parts of it are excellent: I really do
> know where my hands are, and what they are doing. Parts of it are terrible:
> I really don't know much about the vascularization of my lower legs, or how
> a black-and-blue spot appeared there.   But hey, self-driving cars have a
> very good idea of what is on the road in front of them, and have no idea at
> all about the chemistry of rubber.  Self-driving cars have a self-model.
> Which is maybe less than a self-theory?
>
> >> What I meant by self-theory was the ability to form a hypothesis and
> evolve a theory and test for such a hypothesis, all the while spinning off
> the learning as computational functions of programming. I think I have a
> very good idea of what form an agi-service would take. Agreed, it's not a
> smart machine. In my vision, it's a species.
>
> As a human, do I engage in self-programming? Sure. I make new-years
> resolutions every day. I even keep some of them.  Self-driving cars don't,
> for obvious liability reasons.
>
> >> Self-programming would be an ability to code programs on demand, or via
> threshold triggers, and so on. Perhaps, as an evolutionary step.
>
> Do I expect an AGI to be equally self-aware, and equally in self-control?
> Yes.
>
>
> In its most-simplistic form, a symbol is but a suitable abstraction of a
> greater reality, similarly to how a symbol of a red-heart might be an
> abstraction of a sentient being. Concept? Context? Meaning? Transaction.
>
> Who, or what decides what the symbolic world should look like and its
> meaningfulness?
>
>
> Is this a rhetorical question?
>
> >> Not really
>
> In the human sphere, the poets, painters, dancers and mathematicians
> decide what the symbolic world should look like, and work very hard to
> capture its meaning in poetry, paintings, movements and equations. I expect
> AGI to struggle with the same issues.
>
> >> Too vague, too generalist. I think symbolism emerges from diversity, or
> more accurately, programs of diversification.
>
> But if you mean "who decides whether symbol #4589342472934 should even
> exist, or what it means?" ... heck I dunno. Some algorithm, the same
> algorithm that decided that symbol #11316372398 is meaningful.
>
> >> The designer decides the system of symbolism. The agi entity has the
> existential prerogative to symbolize.
>
> In between the symbols generated by algorithms and the symbols generated
> by poets are the symbols generated by insurance companies, zoning
> commissions, safety regulators and lawyers. These symbols have names like
> "penal code #234241;para.4.b-addendum 62" and are almost as boring to read
> as reading software.
>
> Do I expect the middle layers of AGI to look like something an insurance
> company or zoning commission might write? Sure, why not?  I expect a
> Chinese Room hard at work, in the middle layers of the AGI.
>
> >> Linear and alinear systems contribute to holistic systems. I think it
> was Checkland who made the point most clearly. The discussion moves between
> linear and alinear systems, and considers something in between. Why not
> simply extract the fractal value of an instance of linear and alinear
> contexts and symbolize that in an agi, as a knowledge node, and so on? That
> was my point on fractals, the search for a pattern of meta superpositions
> (the paradoxes), which may well hold a key to finite-infinity within
> systems. Gell-Mann provided theoretical essence in his discussions on
> scalability (in the sense of boundary-independent form - my words) and his
> notion of intermediate.
>
> Having said all this makes me realize that there is a lot more accepted
> theory than we could imagine. What is required then are the appropriate
> frameworks to put those theories to work in context of an agi service.
>
>
> The global state of social evolution may cause terrible confusion in any
> learning entity. The learning objectives should be specific, not
> generalized. Isn't learning incorrectly worse than not learning at all?
>
>
> That is certainly a hotly debated question in Brexit and Trump
> discussions.  More accurately, the problem is one of not having an accurate
> understanding of the world, and being unable to get one.
>
> >> the clamor for truth vs big data?
>
> It's not so much a case of "learning badly" as one of hallucinating:
> hallucinating that things will be better, or worse, if England leaves the
> EU, etc. The combined sensory-system+political-brains seem to be incapable
> of figuring this out.
>
> >> general relativity - self interest drives what we see, and learn.
>
> I expect AGI to hallucinate, too. Just not like us.  Actually, I expect
> AGI to be schizophrenic, psychopathic, and a zillion other rather negative
> things that are existentially dangerous to humans. That's the tricky part,
> the part that is unpleasant to face.
>
> >> I think agi, as a concept, already is.
>
>
> I think there should be a general agi-architecture, replete with the
> capacity to develop and function within a generic world view. Furthermore,
> I think the real value would be derived from specialized AGI. Maybe beyond
> that, an AGI architecture would - in future - morph via its own social
> networking and inherent capabilities to become more than the sum of its
> parts.
>
>
> Well, you used lower-case-agi and Upper-Case-AGI there.  There's no such
> thing as an  Upper-Case-AGI architecture -- claiming to have one is like
> claiming to have the blueprints for a rocket-ship to another galaxy.
>
> >> An interesting observation. I have no idea why the case-sensitivity.
> Perhaps it does not really matter at all. How did you come so far on your
> journey to another galaxy without a blueprint? Surely, it must all be just
> luck? No, it's the result of years of driving passion and vision. The
> blueprint must exist in your collective minds.
>
> However, lower-case-agi -- well, that is more like mountain climbing: you
> try to go one way, until you find that you can't, so you try another way,
> until you can.  You explore, looking for routes to get from here to there.
> So, if Upper-Case-AGI is a mountain peak, then we are at the foothills of
> the Himalayas, fumbling and tripping and getting exhausted.  Explorer X
> says that the deep-learning route is promising; explorer Y disagrees.
> Everybody's got a base-camp, and some are busier and livelier than others.
>
>
> To do so, would take a lot more than intersections. I agree with the
> statements made about binary/vector theory, but it seems obvious to me that
> this would not be sufficient for this task. You implied fractals.
>
>
> Heh. Careful with the analogies, there. Fractals are manifestations of
> shift-spaces. Which are infinite-dimensional vector spaces. The last three
> decades have exposed a deep and abstract mathematical theory for
> "fractals".  That theory is ... interesting, but has rather little to do
> with AGI.  Like pretty much nothing at all.
>
> >> Every fractal has a distinct boundary. It might be a symbolic black
> hole to some, but I would argue that it's not an "infinite-dimensional
> vector space".  A pure object is a fractal too, as is a context. Not in a
> physical sense, but in an informational sense, where physics behaves in the
> role of the carrier and information in the role of the content being
> carried. I contend that fractals have very much everything to do with AGI.
>
> We, as researchers, may not all be using the same terminology, but the
> concepts you are discussing in your latest response to Rob are not foreign
> to my mind. Perhaps, there are different routes via which to discover AGI,
> and as evidenced within Ben Goertzel's treatise on world religions, there
> exist different paths for different people towards achieving AGI
> enlightenment.
>
> What if we had at our disposal a common language to start putting these
> paths together within a single AGI frame? Imagine, consensus.
>
> PS: I'd think the answer to your compound relational question is: 27. Even
> so, only if a rule were being enforced whereby a singular, existential
> method of association between X and Y entities was deployed. Hierarchy
> itself is linear. It can flow over an alinear system.
>
>
> -- linas.
>
>
> To my mind, that would be the only way to proceed. As such, I think the
> primary issue remains a design issue.
>
> Robert Benjamin
>
> ------------------------------
> *From:* Linas Vepstas <[email protected]>
> *Sent:* Monday, 18 February 2019 10:36 PM
> *To:* AGI
> *Subject:* Re: [agi] Some thoughts about Symbols and Symbol Nets
>
>
>
> On Mon, Feb 18, 2019 at 1:17 PM Mike Archbold <[email protected]> wrote:
>
> I'm not sure I completely follow your point, but I sort of get it.
>
> I tend to think of symbols as one type of the "AI stuff" a computer
> uses to think with -- the other main type of "AI stuff" being neural
> networks. These have analogies to the "mind stuff" we use to think
> with.
>
>
> Symbol systems and neural-net systems can be seen to be variants of the
> same thing; two sides of the same coin. I posted an earlier thread on this.
> There's a 50-page long PDF with math, here:
> https://github.com/opencog/opencog/raw/master/opencog/nlp/learn/learn-lang-diary/skippy.pdf
>
> roughly: both form networks. They differ primarily in how they represent
> the networks, and how they assign weights to network connections (and how
> they update weights on network connections).
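A minimal sketch of the "two sides of the same coin" point (my illustration, not taken from the PDF): the same tiny network held symbolically, as explicit labeled edges with weights, and neurally, as a dense weight matrix where the edge labels disappear into indices.

```python
# Symbolic style: edges are explicit, labeled, and individually weighted.
symbolic_net = {
    ("cat", "is-a", "animal"): 0.9,
    ("cat", "has", "fur"): 0.8,
    ("dog", "is-a", "animal"): 0.9,
}

# Neural style: the same connectivity packed into a weight matrix;
# rows and columns index nodes, and the relation labels are gone.
nodes = ["cat", "dog", "animal", "fur"]
idx = {n: i for i, n in enumerate(nodes)}
matrix = [[0.0] * len(nodes) for _ in range(len(nodes))]
for (src, _label, dst), w in symbolic_net.items():
    matrix[idx[src]][idx[dst]] = w

print(matrix[idx["cat"]][idx["animal"]])  # 0.9 -- same fact, different encoding
```

Both encodings carry the same graph; what differs is how the weights are stored and, downstream, how they get updated.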
>
>
> On their own, symbols don't mean anything, of course, and inherently
> don't contain "understanding" in any definition of understanding.
>
> Is there a broad theory of symbols? We kind of proceed with loose
definitions. I remember reading the Newell and Simon works, and they
saw AI strictly in terms of symbols and LISP (as I recall anyway).
>
>
> Yes. The "broad theory of symbols" is called "model theory" by
> mathematicians. It's highly technical and arcane. Its most prominent
> distinguishing feature is that everything is binary: it is or it ain't.
> Something is true, or false.  A formula takes values, or there is no such
> formula. A relation binds two things together, or there is no relation.
> There's no blurry middle-ground.
>
> So, conventionally, networks of symbols, and the relations between them,
> and the formulas transforming them -- these form a network, a graph, and
> everything on that network/graph is a zero or a one -- an edge exists
> between two nodes, or it doesn't.
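A toy rendering of the model-theoretic picture described above (mine, for illustration): membership in a relation is strictly 0/1, with no middle ground.

```python
# A relation is just a set of tuples: a triple is in it, or it isn't.
relation = {("socrates", "is-a", "man"), ("man", "is-a", "mortal")}

def holds(subject, rel, obj):
    """True iff the triple is in the relation; no blurry middle ground."""
    return (subject, rel, obj) in relation

print(holds("socrates", "is-a", "man"))  # True
print(holds("socrates", "is-a", "god"))  # False
```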
>
> The obvious generalization is to make these fractional, to assign weights.
> Neural nets do this. But neural nets do something else, that they probably
> should not: they jam everything into vectors (or tensors). This is kind-of
> OK, because the algebra of a graph is a lot like the algebra of a vector
> space, and the confusion between the two is an excusable mistake: it takes
> some sophistication to realize that they are only similar, but not the same.
>
> I claim: fix both these things, and you've got a winner.  Use symbolic
> systems, but use fractional values, not 0/1 relations.  Find a good way of
> updating the weights. So, deep-learning is a very effective weight-update
> algorithm. But there are other ways of updating weights too (that are
> probably just as good, or better).  Next, clarify the
> vector-space-vs-graph-algebra issue, and then you can clearly articulate
> how to update weights on symbolic systems, as well.
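One way to picture "symbolic systems with fractional values" is counting: weight each symbolic edge by observed frequency rather than by backprop. The update rule below is my own toy choice, not one prescribed in the thread.

```python
from collections import Counter

# Observations stream in as symbolic triples; counts become fractional weights.
counts = Counter()
observations = [
    ("rain", "causes", "wet"),
    ("rain", "causes", "wet"),
    ("rain", "causes", "traffic"),
]
for triple in observations:
    counts[triple] += 1

total = sum(counts.values())
weights = {triple: c / total for triple, c in counts.items()}

print(weights[("rain", "causes", "wet")])  # 2/3 -- a fractional value, not a 0/1 relation
```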
>
> (Quickly explained: probabilities are not rotationally symmetric under the
> rotation group SO(N), whereas most neural-net vectors are: this is the spot
> where deep-learning "gets it wrong": it incorrectly mixes Gibbs training
> functions with rotational symmetry.)
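The asymmetry claim can be seen numerically (a sketch of my own): probability vectors live on the simplex (non-negative entries summing to 1), and a generic SO(2) rotation does not preserve that.

```python
import math

p = [1.0, 0.0]           # a valid probability distribution
theta = math.pi / 4      # rotate by 45 degrees
q = [p[0] * math.cos(theta) - p[1] * math.sin(theta),
     p[0] * math.sin(theta) + p[1] * math.cos(theta)]

print(sum(p))  # 1.0 -- on the simplex
print(sum(q))  # ~1.414 -- the rotation broke normalization
```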
>
> So Jim is right: discarding symbolic systems in favor of neural nets is a
> mistake; the path forward is at the intersection of the two: a net of
> symbols, a net with weights, a net with gradient-descent properties, a net
> with probabilities and probability update formulas.
>
> -- Linas
>
>
> On 2/18/19, Jim Bromer <[email protected]> wrote:
> > Since I realized that the discrete vs weighted arguments are passé I
> > decided that thinking about symbol nets might be a better direction for
> > me.
> >
> > 1. A symbol may be an abstracted 'image' of a (relatively) lower level
> > object or system.
> >   An image may consist of a feature of the referent, it may be an icon of
> > the referent or it may be a compressed form of the referent.
> > 2. A symbol may be more like a 'label' for some object or system.
> > 3. A generalization may be represented as an image of what is being
> > generalized but it also may be more of a label.
> > 4. An 'image', as I am using the term, may be derived from a part or
> > feature of an object or from a part of a system but it may be used to
> > refer
> > to the object or system.
> > 5. An image or label may be used to represent a greater system. A system
> > may take on different appearances from different vantage points, and
> > analogously, some features of interest may be relevant in one context but
> > not from another context. A symbol may be correlated with some other
> > 'object' and may stand as a referent to it in some contexts.
> >
> > So, while some symbols may be applied to or projected onto a 'lower'
> > corpus
> > of data, others would need to use an image to project onto the data
> > field.
> > I use the term 'lower' somewhat ambiguously, because I think it is
> > useful
> > to symbolize a system of symbols so a 'higher' abstraction of a system
> > might also be used at the same level. And it seems that a label would
> > have
> > to be associated with some images if it was to be projected against the
> > data.
> >
> > One other thing. This idea of projecting a symbol image onto some data,
> > in
> > order to compare the image with some features of the data, seems like it
> > has fallen out of favor with the advancements of dlnns and other kinds of
> > neural nets. Projection seems like such a fundamental process that I
> > cannot
> > see why it should be discarded just because it would be relatively slow
> > when used with symbol nets. And, there are exceptions, GPUs, for example,
> > love projecting one image onto another.
> > Jim Bromer
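Jim's "projection" can be sketched as plain cross-correlation (my illustration, in 1-D for brevity): slide a small template over a data field and score the match at each offset. GPUs run the 2-D version of exactly this.

```python
# A symbol's "image" is a small template; project it onto the data by
# scoring the dot product at every offset.
signal = [0, 1, 3, 1, 0, 0, 1, 3, 1, 0]
template = [1, 3, 1]

scores = [sum(t * signal[i + j] for j, t in enumerate(template))
          for i in range(len(signal) - len(template) + 1)]
best = scores.index(max(scores))

print(best)  # 1 -- the first offset where the template projects most strongly
```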
> 
> 
> --
> cassette tapes - analog TV - film cameras - you
> 


-- 
Stefan Reich
BotCompany.de // Java-based operating systems

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tcc0e554e7141c02f-M745d9f703d0045eaf2b8333c
Delivery options: https://agi.topicbox.com/groups/agi/subscription
