The probability that AGI will use probability is probably rather probable. If there is a rustling in the bushes behind you, you don't have time to reason out a complete axiomatic proof of which action you ought to take next; in our evolutionary past, a system that tried would have become lunch for whatever was rustling in the bushes. Similarly, an AGI system will find that when rapid decisions need to be made, similar shortcuts have to be taken given the time constraints.
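As a minimal sketch of that kind of time-bounded shortcut (everything below is a hypothetical illustration, not something proposed elsewhere in this thread: the "rustling" signal, the deadline, and the quick_estimate/refine helpers are all made up), a decision loop might keep a cheap probabilistic guess on hand and only refine it while time remains:

import time
import random

def quick_estimate(signal):
    # Cheap heuristic prior: rustling is probably dangerous.
    return 0.9 if signal == "rustling" else 0.1

def refine(p_danger):
    # Stand-in for one step of slower, more careful reasoning.
    time.sleep(0.01)
    evidence = random.random()  # pretend this is a noisy observation
    return 0.8 * p_danger + 0.2 * evidence

def decide(signal, deadline_s=0.05):
    # Always act by the deadline, however rough the estimate still is.
    start = time.monotonic()
    p_danger = quick_estimate(signal)
    while time.monotonic() - start < deadline_s:
        p_danger = refine(p_danger)
    return "flee" if p_danger > 0.5 else "carry on"

print(decide("rustling"))

The point is only that the system commits to an action when the clock runs out rather than when the reasoning is complete.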
It is highly adaptive to be able to anticipate what will happen in the future, and one of the best predictors of the future is the past. In particular, building models of the probabilities of past events can help one estimate the probabilities of future events, as long as the rules stay the same. That is why the wealthiest people in the world change the rules: humans, with their evolved brains, use probabilistic estimations of the future, and by changing the rules on the masses they can exploit the nature of humans (and even of machine learning systems) so that their predictions become erroneous under the new rules, and thus profit from the incorrect predictions of others. Of course an AGI system will eventually figure this out, and if given the goal of maximizing its ability to predict the future, it will seek to gain control over the changing of the rules. And since an AGI will be enormously valuable to society, people will consume its output, which would enable the AGI to (at least to some extent) change the rules by outputting that which it wishes to be so, exploiting a feedback loop with society via a kind of self-fulfilling prophecy. Does that make sense?

On Tue, Apr 11, 2017 at 6:32 PM, Ben Goertzel <[email protected]> wrote:

> FWIW, I do believe that probability theory can productively be used as a key part of the foundation for an AGI system...
>
> I also think it may be best to use some unusual probability-like objects instead of standard probabilities, see
>
> https://arxiv.org/abs/1703.04382
>
> (on a certain sort of intuitionist probabilities) and then
>
> https://arxiv.org/pdf/1703.04361.pdf
>
> which refers to that ...
>
> In general I think that explicitly using probabilities in one's AGI system is a nice-to-have, not a must-have; certainly one could make an AGI using e.g. neural nets of some sort that implicitly had dynamics that in many cases were roughly equivalent to the outcome of probabilistic calculations...
>
> Furthermore, approximating probabilistic calculations in some heuristic way is bound to be necessary for any practical AGI system, as doing exact probability-theory calculations based on all observations received by a system engaged with a complex world is not going to be feasible...
>
> -- Ben G
>
> On Wed, Apr 12, 2017 at 8:25 AM, Mike Archbold <[email protected]> wrote:
> > Jim, interesting thoughts. You mention an "abstraction dilemma (that I mentioned but did not describe in any detail)." I kind of got the feel for what you were talking about, but I still don't really see what you mean, other than that it is tough to go from different levels and types of abstractions. To me every thought is an abstraction except the proposition EVERYTHING -> EVERYTHING (both subject and predicate including all possible whatevers...)
> >
> > Nanograte: what do you mean by deabstraction? To me that conjures up images of making something concrete.
> >
> > Mike A
> >
> > On 4/11/17, Jim Bromer <[email protected]> wrote:
> >> >>Closing with a lingering afterthought; If all intelligence was relative, surely all intelligence must be probable.<<
> >>
> >> Not necessarily. This is a case where a conclusion that can be interpreted using different kinds of abstractions is assigned one particular abstraction. It is a little like an exaggeration.
> >> You can use probable methods on relative knowledge (or knowledge that can be seen as relativistic), but that is not the only abstraction (abstract process) that would be needed by a would-be AGI program to 'understand' that knowledge.
> >>
> >> Jim Bromer
> >>
> >> On Tue, Apr 11, 2017 at 10:07 AM, Nanograte Knowledge Technologies <[email protected]> wrote:
> >>
> >>> The purpose of specification is to unify the design. It is not up to programmers to re-invent the design, but to apply themselves fully to realizing the functional objectives they are assigned to. Thus, the issue should not be one of managing programmers, but of specification and programming competency. Nothing new here, except, as you correctly pointed out, the level of competency needed both to specify and translate an AGI design into pseudo code (in the sense of programmable logic) and for programmers to be able to translate that into machine code.
> >>>
> >>> I agree with the frustration of specifying what exactly would constitute AGI at a logical and physical level. The knowhow you are referring to in terms of which knowledge schema to use is most valid. Further, your point on the physical constraints of computing platforms is generally well noted internationally. Obvious room for improvement.
> >>>
> >>> However, technically, it is now possible for a workable hardware/software platform to be assembled to test AGI components with. Further, new programming tool(s) exist to code AGI logic with.
> >>>
> >>> Practically, the AGI logic is missing. It is this logic which I assert to be available in a distributed form throughout the world. Irrespective of whether one considers this from a programming or a logic perspective, the pseudo code still has to be written, coded, and tested.
> >>>
> >>> We have reached a tangible point in AGI, which is: "Show us the pseudo code." And the response to that, "Pseudo code for what?", should become most relevant. It is that "what" which would ultimately define AGI.
> >>>
> >>> Let me ask it this way then: "Is there somewhere in the world today a center or institution where the passionate few could go in order to collaboratively specify this pseudo code for a version of AGI, where programmers and tools and a test platform are made ready to test this logic?" I am not aware of such a place.
> >>>
> >>> Should such pseudo code be written for free, programmed for free, and tested for free? Never. Someone has to fund it, and fund it properly.
> >>>
> >>> Unless we pitted our design and programming competencies against AGI (which is the challenge before us) within a suitable SDLC, we would not know whether or not yours, mine, or anyone else's version, or collaborative versions, of approaching AGI would ever work. I am not smart enough to program this logic, but I may have been smart enough to co-write the pseudo code.
> >>>
> >>> In the absence of the collaborative laboratory, would we ever know? If only you were proven correct, this AGI question might be put to bed.
> >>>
> >>> Closing with a lingering afterthought; If all intelligence was relative, surely all intelligence must be probable.
> >>>
> >>> ------------------------------
> >>> *From:* Jim Bromer <[email protected]>
> >>> *Sent:* 11 April 2017 11:58 AM
> >>> *To:* AGI
> >>> *Subject:* Re: [agi] I Still Do Not Believe That Probability Is a Good Basis for AGI
> >>>
> >>> Cooperation is impossible because people have different ideas about how it should be done, and as problems are noticed (management, for example), the tasks that need to be done become diversified in a non-focused way. So we are now talking about managing people. I could turn this back to the essence of what we were talking about before by mentioning the programmed management that would be needed for a complicated AGI program. I think relatively simple guidelines about abstractions could be easily automated. So if my theory about abstraction is valid, then such guidelines could lead to some simple programming design that would incorporate them. But the problem is that the design I have in mind would not (for example) run as a neural network. Continuing with refocusing your ideas about management back onto a discussion about programming AGI (as if you were subconsciously talking about programming rather than managing programmers), I would point out that most AGI paradigms do not produce results that can be efficiently used by competing paradigms. So there would be a serious management issue there. For example, a neural network cannot be examined (by the program) in order (for the program) to determine what abstractions it had used to come to a conclusion. A weighted graph (a probability network) should be better at this, but here the problem is that the stages of the process have to be saved in order for an advancement like this to work. The efficiency of the method would then be lost because it would become memory exhaustive. If a system incorporated (more) discrete abstractions, a trace of a decision process could be made based on the abstracting principles that were discovered to be useful for examining the process. (This is a function of meta-analysis or meta-awareness.)
> >>>
> >>> Management of people is largely based on a predetermination that some focused goals are reasonable. Even if creativity is emphasized, the push is to creatively solve the narrow tasks that you are assigned. As the workers are given more autonomy to reach for a relatively more general goal, the coordination of the methodologies and goals will be lost.
> --
> Ben Goertzel, PhD
> http://goertzel.org
