Introducing ambiguity: I draw a distinction between knowledge and intelligence, as the two are similar yet different. Not all knowledge may be regarded as intelligence, but all intelligence may be regarded as knowledge. On an existential scale, that would place intelligence evolutionarily higher than knowledge. Abstracting this further, all understanding (in the sense of a basis for AGI reasoning) may be regarded as both intelligence and knowledge, but not all knowledge and/or intelligence may be regarded as understanding. Suddenly a knowledge branch emerges, perhaps as a genetic step?

What I just said may not be true at all, but in the context of placing the three concepts (knowledge, intelligence, understanding) in a rule-based framework and justifying these statements, the stated associative "relationships" may be true enough to proceed with a reliable-enough AGI model: a testable version of AGI, as a basis for learning.
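
To make the rule-based framing concrete, here is a minimal sketch (in Python; the class and rule names are my own illustrative assumptions, not an established model) that encodes the stated relationships as nested, testable sets:

    from dataclasses import dataclass, field

    @dataclass
    class KnowledgeBase:
        # The stated hierarchy: understanding <= intelligence <= knowledge.
        knowledge: set = field(default_factory=set)
        intelligence: set = field(default_factory=set)
        understanding: set = field(default_factory=set)

        def add(self, item, intelligent=False, understood=False):
            # Every item is knowledge; intelligence and understanding are
            # progressively stricter subsets, per the stated relationships.
            self.knowledge.add(item)
            if intelligent or understood:
                self.intelligence.add(item)
            if understood:
                self.understanding.add(item)

        def consistent(self):
            # The framework is testable: the hierarchy must hold across all
            # known (domain) knowledge, or the rules need adapting.
            return (self.understanding <= self.intelligence
                    and self.intelligence <= self.knowledge)

    kb = KnowledgeBase()
    kb.add("water boils at 100 C at sea level")                # knowledge only
    kb.add("boil water before drinking it", intelligent=True)  # also intelligence
    kb.add("why boiling sterilizes water", understood=True)    # all three
    assert kb.consistent()

The point is only that the associative relationships become assertions a machine can test against domain knowledge, and adapt when they fail.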

The moment relationships of any functional value (associations) and any framework of hierarchy (systems) can be established and tested against all known (domain) knowledge, and even changed should the rules driving such a hierarchy change (adapted), the result may be regarded as a concrete version of a probabilistic framework.

Enter the old data, information, knowledge, wisdom (DIKW) argument. Some of the bits, when classified in terms of systemic value, are probably absolutely true, but which ones are they? Again, using the same reasoning framework as in the previous paragraphs, the machine could probably figure this out. It all depends on which framework of measurement, scale, reference point, etc. is being used. More on this point later.

Agreed, hard facts (e.g., biological age) may differ in their intrinsic absoluteness and be processed accordingly by computational models, but even so, the very notion of AGI would not have existed unless we had humans and computational frameworks to compare it against.

To contend: probability may not be a "good" basis for AGI, just as love may not be a good basis for marriage, but what might be a "good" basis is a reliable engine (a reasoning and unreasoning computational framework) for managing relativity. This is where philosophy started: unraveling a reasoning ontology.

Herein lies the deabstraction logic: the ability to reliably and consistently make sense of how things systemize at any class of abstraction (conceptual, and/or logical, and/or physical), then to adaptively progress (read: an evolutionary step) to another level of abstraction thereafter (again: conceptual, and/or logical, and/or physical) on the basis of such sense-making, but not on that basis alone.
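
As an illustration only (the level names come from the paragraph above; every function here is a hypothetical placeholder, not a worked method), the deabstraction loop might look like:

    # Illustrative sketch of the deabstraction loop; placeholder logic only.
    ABSTRACTION_LEVELS = ["conceptual", "logical", "physical"]

    def systemize(level, model):
        # Placeholder sense-making: organize this level into a system
        # of relations and record the result.
        model[level] = "system-of-%s-relations" % level
        return model

    def makes_sense(level, model):
        # Placeholder test; per the text, progression requires such
        # sense-making, though not on that basis alone.
        return level in model

    def deabstract():
        model = {}
        for level in ABSTRACTION_LEVELS:
            model = systemize(level, model)
            if not makes_sense(level, model):
                break  # adapt the rules before progressing further
        return model

    print(deabstract())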

It is simple logic really, provided a methodology (a method and a management system) of abstraction/deabstraction existed. Accepted, there may be other methodologies at work as well in any single instantiation of an AGI outcome, not just one of abstraction/deabstraction.

The collective singularity of the overall AGI system falls within the domain of "unreason" (not knowing why we know, for now), but reason (knowing why we now know) persists as a realistic, constructivist platform. Reason is the boundary.

I think, for all of "this" to succeed functionally, a reliable method for 
"most-true-for-now" classification is key.

________________________________
From: Jim Bromer <[email protected]>
Sent: 12 April 2017 12:59 AM
To: AGI
Subject: Re: [agi] I Still Do Not Believe That Probability Is a Good Basis for 
AGI


>>Closing with a lingering afterthought: if all intelligence were relative, surely all intelligence must be probable.<<

Not necessarily. This is a case where a conclusion that can be interpreted using different kinds of abstractions is assigned one particular abstraction. It is a little like an exaggeration. You can use probabilistic methods on relative knowledge (or knowledge that can be seen as relativistic), but that is not the only abstraction (abstract process) that would be needed by a would-be AGI program to 'understand' that knowledge.

Jim Bromer


On Tue, Apr 11, 2017 at 10:07 AM, Nanograte Knowledge Technologies <[email protected]> wrote:

The purpose of specification is to unify the design. It is not up to programmers to re-invent the design, but to apply themselves fully to realizing the functional objectives they are assigned. Thus, the issue should not be one of managing programmers, but of specification and programming competency. Nothing new here, except, as you correctly pointed out, the level of competency required both to specify and translate an AGI design into pseudo code (in the sense of programmable logic) and for programmers to be able to translate that into machine code.

I agree with the frustration of specifying what exactly would constitute AGI at a logical and physical level. The know-how you are referring to, in terms of which knowledge schema to use, is most valid. Further, your point on the physical constraints of computing platforms is generally well noted internationally. There is obvious room for improvement.

However, technically, it is now possible to assemble a workable hardware/software platform for testing AGI components. Further, new programming tool(s) exist for coding AGI logic.

Practically, the AGI logic is missing. It is this logic which I assert to be available in distributed form throughout the world. Irrespective of whether one considers this from a programming or a logic perspective, the pseudo code still has to be written, coded, and tested.

We have reached a tangible point in AGI, which is: "Show us the pseudo code." And the response to that, "Pseudo code for what?", should become most relevant. It is that "what" which would ultimately define AGI.

Let me ask it this way then: "Is there, somewhere in the world today, a center or institution where the passionate few could go to collaboratively specify this pseudo code for a version of AGI, and where programmers, tools, and a test platform are made ready to test this logic?" I am not aware of such a place.

Should such pseudo code be written for free, programmed for free, and tested 
for free? Never. Someone has to fund it, and fund it properly.

Unless we pitted our design and programming competencies against AGI (which is the challenge before us) within a suitable SDLC, we would not know whether or not yours, mine, or anyone else's version, or collaborative versions, of approaching AGI would ever work. I am not smart enough to program this logic, but I may be smart enough to co-write the pseudo code.

In the absence of the collaborative laboratory, would we ever know? If only you were proven correct, this AGI question might be put to bed.

Closing with a lingering afterthought: if all intelligence were relative, surely all intelligence must be probable.

________________________________
From: Jim Bromer <[email protected]>
Sent: 11 April 2017 11:58 AM
To: AGI
Subject: Re: [agi] I Still Do Not Believe That Probability Is a Good Basis for 
AGI

Cooperation is impossible because people have different ideas about how it should be done, and as problems are noticed (management, for example), the tasks that need to be done become diversified in a non-focused way. So we are now talking about managing people. I could turn this back to the essence of what we were talking about before by mentioning the programmed management that would be needed for a complicated AGI program. I think relatively simple guidelines about abstractions could be easily automated. So if my theory about abstraction is valid, those guidelines could lead to some simple programming design that would incorporate them. But the problem is that the design I have in mind would not (for example) run as a neural network.

Continuing with refocusing your ideas about management back onto a discussion about programming AGI (as if you were subconsciously talking about programming rather than managing programmers), I would point out that most AGI paradigms do not produce results that can be efficiently used by competing paradigms. So there would be a serious management issue there. For example, a neural network cannot be examined (by the program) in order (for the program) to determine what abstractions it had used to come to a conclusion. A weighted graph (a probability network) should be better at this, but here the problem is that the stages of the process have to be saved in order for an advancement like this to work. The efficiency of the method would then be lost because it would become memory-exhaustive. If a system incorporated (more) discrete abstractions, a trace of the decision process could be made, based on the abstracting principles that were discovered to be useful, to examine the process. (This is a function of meta-analysis or meta-awareness.)
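
To illustrate the contrast (a sketch under my own assumptions of what such a trace could look like, not a description of Jim's design): a system built from discrete abstractions can save each stage and read its own trace back, which is exactly what a trained network's weights do not permit.

    from dataclasses import dataclass, field

    @dataclass
    class TracedReasoner:
        trace: list = field(default_factory=list)

        def apply(self, abstraction, premise):
            # Saving each stage makes the decision process examinable later,
            # at the memory cost noted above for weighted-graph approaches.
            conclusion = "%s(%s)" % (abstraction, premise)
            self.trace.append((abstraction, premise, conclusion))
            return conclusion

    r = TracedReasoner()
    step1 = r.apply("generalize", "this swan is white")
    step2 = r.apply("qualify", step1)
    for abstraction, premise, conclusion in r.trace:
        print("%s: %s -> %s" % (abstraction, premise, conclusion))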

Management of people is largely based on a predetermination that some focused 
goals are reasonable. Even if creativity is emphasized, the push is to 
creatively solve the narrow tasks that you are assigned. As the workers are 
given more autonomy to reach for a relatively more general goal, the 
coordination of the methodologies and goals will be lost.


