>>Closing with a lingering afterthought: if all intelligence were relative, surely all intelligence must be probable.<<
Not necessarily. This is a case where a conclusion that can be interpreted using different kinds of abstractions is assigned one particular abstraction. It is a little like an exaggeration. You can use probabilistic methods on relative knowledge (or knowledge that can be seen as relativistic), but that is not the only abstraction (abstract process) that a would-be AGI program would need in order to 'understand' that knowledge.

Jim Bromer

On Tue, Apr 11, 2017 at 10:07 AM, Nanograte Knowledge Technologies <[email protected]> wrote:

> The purpose of specification is to unify the design. It is not up to programmers to re-invent the design, but to apply themselves fully to realizing the functional objectives they are assigned. Thus, the issue should not be one of managing programmers, but one of specification and programming competency. Nothing new here, except, as you correctly pointed out, the level of competency required both to specify and translate an AGI design into pseudo code (in the sense of programmable logic) and for programmers to be able to translate that into machine code.
>
> I agree with the frustration of specifying what exactly would constitute AGI at a logical and physical level. The know-how you are referring to, in terms of which knowledge schema to use, is most valid. Further, your point on the physical constraints of computing platforms is generally well noted internationally. There is obvious room for improvement.
>
> However, technically, it is now possible to assemble a workable hardware/software platform for testing AGI components. Further, new programming tools exist for coding AGI logic.
>
> Practically, the AGI logic is missing. It is this logic which I assert to be available, in distributed form, throughout the world. Irrespective of whether one considers this from a programming or a logic perspective, the pseudo code still has to be written, coded, and tested.
>
> We have reached a tangible point in AGI, which is: "Show us the pseudo code." And the response to that, "Pseudo code for what?", should become most relevant. It is that "what" which would ultimately define AGI.
>
> Let me ask it this way then: "Is there, somewhere in the world today, a center or institution where the passionate few could go to collaboratively specify this pseudo code for a version of AGI, and where programmers, tools, and a test platform are made ready to test this logic?" I am not aware of such a place.
>
> Should such pseudo code be written for free, programmed for free, and tested for free? Never. Someone has to fund it, and fund it properly.
>
> Unless we pit our design and programming competencies against AGI (which is the challenge before us) within a suitable SDLC, we will not know whether yours, mine, or anyone else's version, or a collaborative version, of approaching AGI would ever work. I am not smart enough to program this logic, but I may have been smart enough to co-write the pseudo code.
>
> In the absence of the collaborative laboratory, would we ever know? If only you were proven correct, this AGI question might be put to bed.
>
> Closing with a lingering afterthought: if all intelligence were relative, surely all intelligence must be probable.
> ------------------------------
> From: Jim Bromer <[email protected]>
> Sent: 11 April 2017 11:58 AM
> To: AGI
> Subject: Re: [agi] I Still Do Not Believe That Probability Is a Good Basis for AGI
>
> Cooperation is impossible because people have different ideas about how it should be done, and as problems are noticed (management, for example), the tasks that need to be done become diversified in an unfocused way. So we are now talking about managing people. I could turn this back to the essence of what we were talking about before by mentioning the programmed management that would be needed for a complicated AGI program. I think relatively simple guidelines about abstractions could be easily automated, so if my theory about abstraction is valid, those guidelines could lead to a simple programming design that incorporates them. But the problem is that the design I have in mind would not (for example) run as a neural network.
>
> Continuing to refocus your ideas about management back onto a discussion about programming AGI (as if you were subconsciously talking about programming rather than managing programmers), I would point out that most AGI paradigms do not produce results that can be efficiently used by competing paradigms. So there would be a serious management issue there. For example, a neural network cannot be examined by the program itself in order to determine what abstractions it had used to come to a conclusion. A weighted graph (a probability network) should be better at this, but here the problem is that the stages of the process have to be saved in order for an advancement like this to work. The efficiency of the method would then be lost because it would become memory-exhaustive. If a system incorporated (more) discrete abstractions, a trace of a decision process could be made, based on whichever abstracting principles were discovered to be useful, in order to examine the process. (This is a function of meta-analysis or meta-awareness.)
>
> Management of people is largely based on a predetermination that some focused goals are reasonable. Even if creativity is emphasized, the push is to creatively solve the narrow tasks that you are assigned. As the workers are given more autonomy to reach for a relatively more general goal, the coordination of the methodologies and goals will be lost.
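P.S. To make my opening point concrete, here is a minimal sketch in Python (every name in it is hypothetical, invented only for illustration, not anyone's actual design). It shows one piece of relative knowledge being handled by two different abstractions: a probabilistic method that attaches a degree of belief to each relation, and a discrete ordering abstraction that derives new relations from the same knowledge. Neither abstraction alone amounts to 'understanding'; the point is only that the probabilistic one is not privileged.

# A minimal sketch, not a design: the same relative knowledge
# consumed by two different abstractions. All names are hypothetical.

from itertools import permutations

facts = [("A", "taller_than", "B"), ("B", "taller_than", "C")]

def probabilistic_view(facts, prior=0.9):
    # One abstraction: attach a degree of belief to each relation.
    return {fact: prior for fact in facts}

def ordering_view(facts):
    # A different abstraction over the same knowledge: treat the
    # relation as a transitive ordering and derive new discrete facts.
    derived = {(subj, obj) for subj, _, obj in facts}
    while True:
        new = {(a, d)
               for (a, b), (c, d) in permutations(sorted(derived), 2)
               if b == c and (a, d) not in derived}
        if not new:
            return derived
        derived |= new

print(probabilistic_view(facts))  # beliefs about the given relations
print(ordering_view(facts))       # includes the derived pair ("A", "C")

An actual program would need some way of choosing among, and combining, such abstractions, and that choice is exactly the part that probability alone does not settle.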
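And a second sketch, for the traceable decision process mentioned in my earlier message quoted above. Again the structure is hypothetical, assumed only for illustration: each step records which discrete abstraction was applied, so the program itself can later examine how a conclusion was reached (meta-analysis). A trained neural network's weights do not expose this, and unlike saving every stage of a weighted graph, only the steps actually taken are stored, so memory grows with the length of the decision rather than with the size of the network.

# A minimal sketch (hypothetical structure throughout) of a decision
# process that records which discrete abstraction produced each step,
# so the trace itself can be examined afterward.

from dataclasses import dataclass, field

@dataclass
class TraceStep:
    abstraction: str    # name of the discrete abstraction applied
    premise: object     # what the step started from
    conclusion: object  # what the step produced

@dataclass
class Reasoner:
    trace: list = field(default_factory=list)

    def apply(self, abstraction_name, rule, premise):
        # Apply one abstraction and record the step in the trace.
        conclusion = rule(premise)
        self.trace.append(TraceStep(abstraction_name, premise, conclusion))
        return conclusion

    def explain(self):
        # Meta-analysis: replay which abstractions led to the result.
        return [f"{step.abstraction}: {step.premise} -> {step.conclusion}"
                for step in self.trace]

r = Reasoner()
result = r.apply("generalize", lambda p: p + ["mammal"], ["dog"])
result = r.apply("infer_property", lambda p: p + ["warm-blooded"], result)
print("\n".join(r.explain()))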
