Agreed on all counts. No deabstraction logic could become programmable without at least being specified in pseudocode. My experience with this problem taught me that it would require a large, dedicated team of extremely smart programmers to write the code. Further, they would have to employ the most suitable programming language to the highest standards.

But regardless of the quality of the programmers, it is the quality of the specification architect (as a coder too) that matters most. And as you aptly pointed out, the ability to imagine, reason about, transcribe and codify (in any form) such complex-adaptive logic would hardly reside in a single individual.

I have zero doubt that the logic for AGI already exists, complete, in various pockets of IP all over the world. What seems impossible is for a single organizational unit to effectively locate, collate and organize those work units to get the job done to prototype level - and to fund it.

As evidenced, all efforts to unite and collaborate on AGI on this forum have seemingly failed. One of the first tenets of KM, indicated by research in the 1990s, was that knowledge workers in general do not willingly share their strongest knowledge. Later, applied research into the role of knowledge in KM organizations with respect to human survival (as instinctive competition) found support for that tenet (refer to Karl Mannheim). It seems KM then evolved into a covert-like approach to secretively harvesting knowledge from social media, forums and other schemes with their own motives (e.g., FB, LinkedIn, Twitter, MSN, GPS apps, shopper loyalty programs, Google, and so on).

In other words, I think the problem of progressing AGI more rapidly is, in all probability, more a People and Organization problem and less a Technology and Process problem. Further, unless the collaboration and resourcing problems are resolved, it seems highly unlikely that progress in developing AGI will be quick. At best, and as witnessed, it is going to be frustratingly sporadic. Maybe someone will actually make the unifying breakthrough, but not unless the IP has already been tendered somewhere public, ready to be harvested.

IBM, for one, have been guilty of harvesting globally and claiming it all as their own, but they know that isn't so. Sooner or later, the originators of the IP used without acknowledgement will catch up with them. When that happens, it may set AGI back quite a few years.

Which raises the final question: who owns AGI? I think AGI belongs to the world at large. Developing AGI for the world may require a sharing-economy model instead of the grabbing the ICT industry has been suffering under for so long.

Just my few thoughts on the matter.

Robert Benjamin



________________________________
From: Jim Bromer <[email protected]>
Sent: 11 April 2017 01:21 AM
To: AGI
Subject: Re: [agi] I Still Do Not Believe That Probability Is a Good Basis for 
AGI

I think that computational models can be fully integrated into probability nets 
but I think that there are some important computational functions (algorithms 
that need to run efficiently) that are missing.

As I tried to understand what you were asking, I realized that I could reanalyze my thoughts and come up with variations of the abstractions of the ideas I had been thinking about. This can partly be explained by the way your remarks affected my thinking, but it also has to be explained by the recognition that my own meta-analysis of my ideas helped me to further derive (or form) my thoughts, and in doing so I created new abstractions to work with. Why can't AI do this kind of thing? Regardless of what you think about my point of view on probability nets, this ability to examine (or reexamine) a thought-out idea seems fundamental to AI, and yet it has so far been a very elusive goal in the field.

So while Deep Nets in combination with other methods have made some dramatic 
advances in AI, a basic essence of human thought seems to be lagging badly.

Probability Nets should be better at this than more mundane Neural Nets. Why aren't they? I think the answer is that they are most efficient when they obscure the various relationships of the abstractions (or abstraction-like processes) that they operate on. If this is what is going on, then discrete methods should be better at this. Why aren't they? In my opinion there are fundamental discrete algorithms that are missing.
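
To make that contrast concrete, here is a toy sketch, purely illustrative (the names and structures below are mine, not any real system): a weighted node collapses its evidence into a single number, losing track of which abstractions contributed, while a discrete record keeps them explicit and available for later re-examination.

def prob_node(evidence):
    # collapse weighted evidence into a single belief value;
    # after this product, the individual relationships are gone
    p = 1.0
    for _name, weight in evidence:
        p *= weight
    return p

def discrete_node(evidence):
    # keep the contributing abstractions as an explicit, inspectable structure
    return {"conclusion": "C", "support": [name for name, _weight in evidence]}

evidence = [("shape-abstraction", 0.9), ("context-abstraction", 0.8)]
print(prob_node(evidence))      # ~0.72 -- the 'why' is obscured
print(discrete_node(evidence))  # the supporting abstractions remain inspectable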

The abstraction dilemma (that I mentioned but did not describe in any detail) 
is an example of the problem. But is it possible that the abstraction dilemma 
is a problem just because it is not a fundamental process of AI reasoning? I 
think that may be a possible explanation.

Jim Bromer

On Mon, Apr 10, 2017 at 2:05 AM, Nanograte Knowledge Technologies <[email protected]> wrote:

Thanks Jim. That was a good read that got me thinking.

What if probability graphs/nets were seamlessly integrated with computational arithmetic via a reliable translation or deabstraction schema? Meaning, each already has its own models. Within computer science, are they mutually exclusive, or is it more a case where the work has simply not been done yet? Was fuzzy logic not aiming for such a model?
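
One naive way to picture such a schema, purely as a sketch (the node names and the translation function below are hypothetical, only to make the question concrete):

def deabstract(distribution):
    # translate a labelled probability distribution into a plain expected value,
    # so it can take part in ordinary computational arithmetic
    return sum(value * p for value, p in distribution.items())

# a tiny probability graph: node -> distribution over discrete values
graph = {
    "sensor_a": {0: 0.2, 1: 0.8},
    "sensor_b": {0: 0.5, 1: 0.5},
}

# once deabstracted, the probabilistic values flow through normal arithmetic
combined = deabstract(graph["sensor_a"]) + deabstract(graph["sensor_b"])
print(combined)  # ~1.3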





________________________________
From: Jim Bromer <[email protected]>
Sent: 09 April 2017 07:35 PM
To: AGI
Subject: [agi] I Still Do Not Believe That Probability Is a Good Basis for AGI

I still do not believe that probability nets or probability graphs represent the best basis for AGI. The advances that have been made with probability nets can be explained by pointing out that (relatively) large numbers of groups using cruder methods (methods shown to have some effectiveness) are likely to produce early advances. When Spock announces the probability he has calculated for some future occurrence, it is humorous to many fans of Star Trek precisely because it is such an absurd ability for a human to make use of. Certain mathematicians (and savants) can make extraordinary calculations, but there is little evidence that they use these calculations in their sound everyday reasoning.

I have pointed out that addition and multiplication using n-ary base number systems were extraordinary achievements. Computers were designed to do arithmetic. So if your AI programming can effectively exploit the leverage that computational arithmetic enjoys, then you should be able to make some advances in the field.
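
As a toy reminder of where that leverage comes from (illustrative only; base and digits chosen arbitrarily), positional n-ary addition touches only one carry per digit position, which is exactly the kind of operation computers were built around:

def add_base_n(a, b, base):
    # add two little-endian digit lists in the given base, carrying as we go
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        result.append(s % base)
        carry = s // base
    if carry:
        result.append(carry)
    return result

print(add_base_n([7, 2, 1], [5, 9, 3], 10))  # 127 + 395 = 522 -> [2, 2, 5]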

Although logical reasoning can be formed using computational arithmetic, there is something clearly missing in the field. The P vs NP problem illustrates this. However, I do not think that a solution of P=NP is necessary for important and significant advances to be made in computational logic. There have been times when advances in logic were made even though P=NP was not achieved. For example, some advances were made in the 1990s using probability relations. (My guess is that the more significant advances were looking at special cases.) This does not mean that I think probability must be the basis for innovations in logic.

I believe that the distinctions between different methods of abstraction will be necessary to make truly significant advances in AGI. I compare this issue to the problem that Cauchy addressed by being "...one of the first to state and prove theorems of calculus rigorously, rejecting the heuristic principle of the generality of algebra of earlier authors" (quote taken from Wikipedia).

I am not imagining myself to be an AGI-Abstraction Cauchy, and I am not saying that AGI theory has to be stated and proved using rigorous theorems. I just think that the logic of abstraction has to be more clearly defined.