All components of thought are abstractions.

AI programs do not seem capable of much reflection on their own thinking.
This points to a general lack of ability to form effective abstractions
(ones that will subsequently be useful), and it means that they are
unable to think about things in the way that we seem to be able to.
Ancient philosophy suggests that these abstractions need to be discovered.

We need to (or we would need to) use both weighted reasoning and discrete
reasoning in AGI. I believe that discrete reasoning is a better basis for
AGI. If so, then how do I explain the advances that weighted reasoning has
been making in the last few years? My opinion is that weighted methods are
more efficient, but that efficiency comes at a cost (as seen in Neural
Nets): the 'thought process' actually obscures the abstractions (or
abstraction-like characteristics) that are used. This means that reflection
and more human-like reasoning are practically out of reach for contemporary
neural networks. Probability Nets or Graphs should be better at this
because they can represent abstractions a little more clearly. Why aren't
they? Because the efficiency they offer is based on their ability to
resolve an event without exploring the nature of the abstractions that were
used in building the (parts of the) net. So in order for a probability net
to be capable of more human-like reasoning, elaborations would have to be
built on top of it. But these elaborations would tend to compromise the
very efficiencies that make the method so interesting.
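
To make the contrast concrete, here is a minimal sketch (in Python, with
all names hypothetical) of the difference I have in mind: a weighted
conclusion is a single number that says nothing about which abstractions
produced it, while a discrete, rule-based conclusion can carry a trace
that a program could later reflect on.

    # Hypothetical sketch: weighted vs. discrete reasoning. This is not
    # an AGI design; it only illustrates what gets obscured.

    def weighted_conclusion(evidence, weights):
        """Resolve an event as one scalar; the contributing abstractions
        cannot be recovered from the result alone."""
        return sum(weights[k] * v for k, v in evidence.items())

    def discrete_conclusion(facts, rules):
        """Forward-chain over named rules, keeping a trace of which
        abstraction (rule) licensed each step."""
        derived, trace = set(facts), []
        changed = True
        while changed:
            changed = False
            for name, premises, conclusion in rules:
                if premises <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    trace.append((name, conclusion))  # enables reflection
                    changed = True
        return derived, trace

    # The weighted result is efficient but opaque ...
    print(weighted_conclusion({"sensor": 0.9, "prior": 0.4},
                              {"sensor": 0.7, "prior": 0.3}))  # 0.75
    # ... while the discrete result can report why it concluded what it did.
    rules = [("birds-fly", {"bird"}, "flies"),
             ("tweety-is-a-bird", {"tweety"}, "bird")]
    print(discrete_conclusion({"tweety"}, rules))

The second function pays for its trace in bookkeeping, which is exactly
the efficiency trade-off described above.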

Why do I reach this conclusion? Because the elaborations that are needed to
deal with abstractions in a more concrete manner would make the system
exhibit the 'P does not equal NP' problem. As components become more
discrete, you lose the efficiencies of systems that combine different
types of abstractions through simple computations.

On the other hand, since abstractions seem to be obscured from easy human
inspection, this might suggest that the discovery of these abstractions is
obscured in human thought as well. So I might be wrong.

Why is weighted reasoning more efficient? Because the computer was designed
around binary mathematics, and any system that can effectively use binary
computations will be more efficient. But these efficient systems utilize
abstractions in a way that obscures their use in the 'thought process'.
The abstractions used in a thought process like that cannot be derived
from a trace of the process without a more exhaustive exploration of the
relationships.

I do not use valuations in my everyday thinking. -This is .893 correct.-
That sounds like nonsense. On the other hand, I do not use elaborate formal
logic in my everyday thinking either. I may use logic, but it seems to be
applied in concise ways to ad-hoc informal systems of thought.

I am not exactly sure what I was thinking of when I mentioned the
abstraction dilemma, but there are many dilemmas that I could mention. For
example, when using weighted reasoning, different abstractions will be used
to produce different kinds of weights. When the weights of these different
kinds of things are combined, the result will not resolve to any of the
particular kinds of abstractions used in the process. So a program might
combine a probability (that comes from an indirect measurement of an
event) with a value that represents an estimate of an uncertainty, along
with a value that represents some other conditional and highly targeted
measurement.
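
A toy illustration of that dilemma, with the three values and the
combining rule invented purely for the example:

    # Hypothetical numbers: three weights derived from three different
    # abstractions, collapsed into one scalar by a simple computation.
    p_event = 0.62      # probability from an indirect measurement
    uncertainty = 0.30  # an estimate of uncertainty (not a probability)
    conditional = 0.85  # a conditional, highly targeted measurement

    combined = p_event * (1.0 - uncertainty) * conditional
    print(round(combined, 4))  # 0.3689 -- a weight of what kind? The
    # result no longer resolves to any of the three abstractions that
    # produced it.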

Another kind of dilemma is due to the fact that abstractions in human
thought are relativistic. In order to think about your use of abstractions
you have to call on other abstractions and they will necessarily modify the
abstractions used in your previous thought.

This kind of discussion, without anything novel to offer, may seem
irrelevant. So, to try to think about this in a more creative way, I am
starting to wonder whether a computational system could be designed so that
it works with abstractions (of thought) in a more carefully defined way.
Since we want an AGI program to be able to learn, it would need to be able
to create its own abstractions and to discover the significant
relationships between these abstractions.
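
As a very rough sketch of what 'more carefully defined' might mean,
assuming a design in which abstractions are first-class, inspectable
objects (everything here, including the discovery rule, is hypothetical):

    # Hypothetical sketch: abstractions as objects the system itself
    # can create, inspect, and relate.
    from dataclasses import dataclass, field

    @dataclass
    class Abstraction:
        name: str
        instances: set = field(default_factory=set)

    class AbstractionStore:
        def __init__(self):
            self.abstractions = {}  # name -> Abstraction
            self.relations = set()  # (name_a, relation, name_b)

        def learn(self, name, instance):
            """Create an abstraction on demand; attach an instance."""
            self.abstractions.setdefault(
                name, Abstraction(name)).instances.add(instance)

        def discover_relations(self):
            """Crude stand-in for discovering significant relationships:
            if every instance of A is also an instance of B, record that
            A specializes B."""
            for a in self.abstractions.values():
                for b in self.abstractions.values():
                    if (a is not b and a.instances
                            and a.instances <= b.instances):
                        self.relations.add((a.name, "specializes", b.name))

    store = AbstractionStore()
    for x in ["robin", "sparrow"]:
        store.learn("bird", x)
        store.learn("animal", x)
    store.learn("animal", "dog")
    store.discover_relations()
    print(store.relations)  # {('bird', 'specializes', 'animal')}

The point of the sketch is only that the abstractions remain visible to
the program that uses them, which is what the weighted methods give up.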

One other thing I am wondering about is whether a similar system could be
used on the 'P = NP?' problem in traditional logic. Here the nature of the
application is different, but it would have to involve variations in the
ways a logical statement might be encoded, with the encodings developed as
the statement was analyzed.
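
I can only gesture at what those encoding variations might look like. As
a hedged toy example (the statement and both encodings are invented), the
same logical statement can be held as an opaque evaluator or as clauses
whose structure an analysis could transform:

    # Hypothetical toy: two encodings of (a or b) and not c.
    from itertools import product

    # Encoding 1: an opaque evaluator -- easy to run, hard to analyze.
    stmt = lambda a, b, c: (a or b) and not c

    # Encoding 2: CNF clauses -- each clause is a list of
    # (variable, polarity) pairs, open to clause-level manipulation.
    cnf = [[("a", True), ("b", True)], [("c", False)]]

    def satisfies(assignment, clauses):
        return all(any(assignment[v] == pol for v, pol in clause)
                   for clause in clauses)

    # The encodings agree on every assignment, but only the second
    # exposes structure that could be varied as the statement is analyzed.
    for a, b, c in product([False, True], repeat=3):
        assert stmt(a, b, c) == satisfies({"a": a, "b": b, "c": c}, cnf)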

Jim Bromer

On Tue, Apr 11, 2017 at 8:25 PM, Mike Archbold <[email protected]> wrote:

> Jim, Interesting thoughts.  You mention an "abstraction dilemma (that
> I mentioned but did not describe in any detail)."  I kind of got the
> feel for what you were talking about, but I still don't really see
> what you mean, other than it is tough to go from different levels and
> types of abstractions.  To me every thought is an abstraction except
> the proposition EVERYTHING -> EVERYTHING (both subject and predicate
> including all possible whatevers...)
>
> Nanograte:  what do you mean by deabstraction?  To me that conjures up
> images of making something concrete.
>
> Mike A
>
>
>
> On 4/11/17, Jim Bromer <[email protected]> wrote:
> >>>Closing with a lingering afterthought; If all intelligence was relative,
> > surely all intelligence must be probable.<<
> >
> > Not necessarily. This is a case where a conclusion that can be
> > interpreted using different kinds of abstractions is assigned one
> > particular abstraction. It is a little like an exaggeration. You can
> > use probable methods on relative knowledge (or knowledge that can be
> > seen as relativistic), but that is not the only abstraction (abstract
> > process) that would be needed by a would-be AGI program to
> > 'understand' that knowledge.
> >
> > Jim Bromer
> >
> >
> > On Tue, Apr 11, 2017 at 10:07 AM, Nanograte Knowledge Technologies <
> > [email protected]> wrote:
> >
> >> The purpose of specification is to unify the design. It is not up to
> >> programmers to re-invent the design, but to apply themselves fully to
> >> realizing the functional objectives they are assigned to. Thus, the
> >> issue should not be one of managing programmers, but of specification
> >> and programming competency. Nothing new here except, as you correctly
> >> pointed out, the level of competency to both specify and translate an
> >> AGI design into pseudo code (in the sense of programmable logic) and
> >> for programmers to be able to translate that into machine code.
> >>
> >> I agree with the frustration of specifying what exactly would
> >> constitute AGI at a logical and physical level. The knowhow you are
> >> referring to, in terms of which knowledge schema to use, is most
> >> valid. Further, your point on the physical constraints of computing
> >> platforms is generally well noted internationally. Obvious room for
> >> improvement.
> >>
> >> However, technically, it is now possible for a workable
> >> hardware/software platform to be assembled to test AGI components
> >> with. Further, new programming tool(s) exist to code AGI logic with.
> >>
> >> Practically, the AGI logic is missing. It is this logic which I
> >> assert to be available in a distributed form throughout the world.
> >> Irrespective of whether one considers this from a programming or
> >> logic perspective, the pseudo code still has to be written, coded,
> >> and tested.
> >>
> >> We have reached a tangible point in AGI, which is: "Show us the
> >> pseudo code." And the response to that, "Pseudo code for what?",
> >> should become most relevant. It is that "what" which would
> >> ultimately define AGI.
> >>
> >> Let me ask it in this way then: "Is there somewhere in the world
> >> today a center or institution where the passionate few could go in
> >> order to collaboratively specify this pseudo code for a version of
> >> AGI, where programmers and tools and a test platform are made ready
> >> to test this logic?" I am not aware of such a place.
> >>
> >> Should such pseudo code be written for free, programmed for free, and
> >> tested for free? Never. Someone has to fund it, and fund it properly.
> >>
> >> Unless we pitted our design and programming competencies against
> >> AGI (which is the challenge before us) within a suitable SDLC, we
> >> would not know whether or not yours, mine, or anyone else's version,
> >> or collaborative versions, of approaching AGI would ever work. I am
> >> not smart enough to program this logic, but I may have been smart
> >> enough to co-write the pseudo code.
> >>
> >> In the absence of the collaborative laboratory, would we ever know?  If
> >> only you were proven correct, this AGI question may be put to bed.
> >>
> >> Closing with a lingering afterthought; If all intelligence was relative,
> >> surely all intelligence must be probable.
> >>
> >>
> >>
> >>
> >> ------------------------------
> >> *From:* Jim Bromer <[email protected]>
> >> *Sent:* 11 April 2017 11:58 AM
> >> *To:* AGI
> >> *Subject:* Re: [agi] I Still Do Not Believe That Probability Is a Good
> >> Basis for AGI
> >>
> >> Cooperation is impossible because people have different ideas about
> >> how it should be done, and as problems are noticed (management, for
> >> example), the tasks that need to be done become diversified in a
> >> non-focused way. So we are now talking about managing people. I could
> >> turn this back to the essence of what we were talking about before by
> >> mentioning the programmed management that would be needed for a
> >> complicated AGI program. I think relatively simple guidelines about
> >> abstractions could be easily automated. So if my theory about
> >> abstraction is valid, then those guidelines could lead to some simple
> >> programming design that would incorporate them. But the problem is
> >> that the design I have in mind would not (for example) run as a
> >> neural network. Continuing with refocusing your ideas about
> >> management back onto a discussion about programming AGI (as if you
> >> were subconsciously talking about programming rather than managing
> >> programmers), I would point out that most AGI paradigms do not
> >> produce results that can be efficiently used by competing paradigms.
> >> So there would be a serious management issue there. For example, a
> >> neural network cannot be examined (by the program) in order (for the
> >> program) to determine what abstractions it had used to come to a
> >> conclusion. A weighted graph (a probability network) should be better
> >> at this, but here the problem is that the stages of the process have
> >> to be saved in order for an advancement like this to work. The
> >> efficiency of the method would then be lost because it would become
> >> memory exhaustive. If a system incorporated (more) discrete
> >> abstractions, a trace of a decision process could be made based on
> >> the abstracting principles that were discovered to be useful for
> >> examining the process. (This is a function of meta-analysis or
> >> meta-awareness.)
> >>
> >> Management of people is largely based on a predetermination that
> >> some focused goals are reasonable. Even if creativity is emphasized,
> >> the push is to creatively solve the narrow tasks that you are
> >> assigned. As the workers are given more autonomy to reach for a
> >> relatively more general goal, the coordination of the methodologies
> >> and goals will be lost.
>


