I think computational models can be fully integrated into probability
nets, but some important computational functions (algorithms that need
to run efficiently) are still missing.
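To give a toy illustration of the kind of integration I have in mind,
here is a minimal sketch in Python. Everything in it is a hypothetical
illustration, not any existing system: one node's conditional table in a
tiny net is produced by an ordinary computation (integer addition)
instead of being learned or enumerated by hand.

    import itertools

    # Two uncertain "digit" variables with explicit priors.
    p_a = {0: 0.5, 1: 0.3, 2: 0.2}
    p_b = {0: 0.6, 1: 0.4}

    def deterministic_sum_cpt():
        # Build P(S | A, B) from an ordinary computation: S = A + B.
        # All probability mass sits on the computed value.
        return {(a, b): {a + b: 1.0}
                for a, b in itertools.product(p_a, p_b)}

    def marginal_sum():
        # Marginal P(S), obtained by summing out A and B.
        marginal = {}
        for (a, b), dist in deterministic_sum_cpt().items():
            for s, p in dist.items():
                marginal[s] = marginal.get(s, 0.0) + p_a[a] * p_b[b] * p
        return marginal

    print(marginal_sum())  # roughly {0: 0.3, 1: 0.38, 2: 0.24, 3: 0.08}

The deterministic node runs in constant time, while a net that had to
represent the same relationship as learned weights would spend its
capacity rediscovering arithmetic. The missing functions I am talking
about are algorithms of that character, but for abstraction rather than
addition.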

As I tried to understand what you were asking, I realized that I could
reanalyze my own thoughts and come up with variations on the abstractions
I had been working with. That is partly explained by the way your remarks
affected my thinking, but it is also explained by the fact that my own
meta-analysis of my ideas helped me to further derive (or form) them, and
in doing so I created new abstractions to work with. Why can't AI do this
kind of thing? Regardless of what you think of my position on probability
nets, this ability to examine (or reexamine) a thought-out idea seems
fundamental to AI, and yet it has been a very elusive goal in the field
so far.

So while Deep Nets, in combination with other methods, have made some
dramatic advances in AI, a basic essence of human thought still seems to
be lagging badly.

Probability Nets should be better at this than more mundane Neural Nets.
Why aren't they? I think the answer is that they are at their most
efficient precisely when they obscure the relationships among the
abstractions (or abstraction-like processes) they operate on. If that is
what is going on, then discrete methods should be better at this. Why
aren't they? In my opinion, some fundamental discrete algorithms are
missing.
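To make the contrast concrete, here is a second minimal sketch (again in
Python, with hypothetical names and numbers). The same "is-a"
relationship can live in a net as an anonymous weight, or in a discrete
structure that a program can walk, compare, and extend; only the latter
makes the kind of reexamination I described above even expressible.

    # Probabilistic encoding: the relationship is implicit in a number.
    weights = {("penguin", "bird"): 0.97}  # which abstraction does 0.97 encode?

    # Discrete encoding: the relationship is an inspectable object.
    rules = {("is_a", "penguin", "bird"),
             ("is_a", "bird", "animal")}

    def derive(rules):
        # One explicit reexamination step: compose is_a edges transitively.
        derived = set(rules)
        for (_, x, y) in rules:
            for (_, y2, z) in rules:
                if y == y2:
                    derived.add(("is_a", x, z))  # new, still-inspectable fact
        return derived

    print(derive(rules))  # now includes ("is_a", "penguin", "animal")

Nothing here is more than a toy, but it shows the sense in which discrete
representations keep the relationships of abstractions out in the open,
where further algorithms can operate on them.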

The abstraction dilemma (which I mentioned earlier but did not describe
in any detail) is an example of the problem. But is it possible that the
abstraction dilemma is a problem simply because it is not a fundamental
process of AI reasoning? I think that is a possible explanation.

Jim Bromer

On Mon, Apr 10, 2017 at 2:05 AM, Nanograte Knowledge Technologies <
[email protected]> wrote:

> Thanks Jim. That was a good read that got me thinking.
>
> What if probability graphs/nets were seamlessly integrated with
> computational arithmetic via a reliable translation or deabstraction
> schema? Meaning, each already has its own model. Within computer science,
> are they mutually exclusive, or is it more a case of the work simply not
> having been done yet? Was fuzzy logic not aiming for such a model?
>
> ------------------------------
> *From:* Jim Bromer <[email protected]>
> *Sent:* 09 April 2017 07:35 PM
> *To:* AGI
> *Subject:* [agi] I Still Do Not Believe That Probability Is a Good Basis
> for AGI
>
> I still do not believe that probability nets or probability graphs
> represent the best basis for AGI. The advances that have been made with
> probability nets can be explained by pointing out that a (relatively)
> large number of groups using cruder methods, methods already shown to
> have some effectiveness, is likely to produce the early advances. When
> Spock announces the probability he has calculated for some future
> occurrence, it is humorous to many fans of Star Trek precisely because
> it is such an absurd ability for a human to have. Certain mathematicians
> (and savants) can perform extraordinary calculations, but there is
> little evidence that they use these calculations in their sound everyday
> reasoning.
>
> I have pointed out that addition and multiplication using n-ary base
> number systems were extraordinary achievements. Computers were designed
> to do arithmetic. So if your AI programming can effectively exploit the
> leverage that computational arithmetic enjoys, then you should be able
> to make some advances in the field.
>
> Although logical reasoning can be built on computational arithmetic,
> something is clearly missing in the field. The P vs NP problem
> illustrates this. However, I do not think that a proof of P=NP is
> necessary for important and significant advances to be made in
> computational logic. There have been times when advances in logic were
> made even though P=NP was not achieved. For example, some advances were
> made in the 1990s using probability relations. (My guess is that the
> more significant advances came from looking at special cases.) This does
> not mean that I think probability must be the basis for innovations in
> logic.
>
> I believe that drawing distinctions between different methods of
> abstraction will be necessary to make truly significant advances in AGI.
> I compare this issue to the problem that Cauchy solved by being "...one
> of the first to state and prove theorems of calculus rigorously,
> rejecting the heuristic principle of the generality of algebra of
> earlier authors" (quote taken from Wikipedia).
>
> I am not imagining myself to be an AGI-Abstraction Cauchy, and I am not
> saying that AGI theory has to be stated and proved using rigorous
> theorems. I just think that the logic of abstraction has to be more
> clearly defined.