On 28 July 2010 23:09, Jan Klauck <[email protected]> wrote:

> Ian Parker wrote
>
>>> "If we program a machine for winning a war, we must think well what
>>> we mean by winning."
>>
>> I wasn't thinking about winning a war, I was much more thinking about
>> sexual morality and men kissing.
>
> If we program a machine for doing X, we must think well what we mean
> by X.
>
> Now clearer?
>
>> "Winning" a war is achieving your political objectives in the war.
>> Simple definition.
>
> Then define your political objectives. No holes, no ambiguity, no
> forgotten cases. Or does the AGI ask for our feedback during mission?
> If yes, down to what detail?
With Matt's ideas it does exactly that.

>> The axioms which we cannot prove should be listed. You can't prove
>> them. Let's list them and all the assumptions.
>
> And then what? Cripple the AGI by applying just those theorems we can
> prove? That excludes of course all those we're uncertain about. And
> it's not so much a single theorem that's problematic but a system of
> axioms and inference rules that changes its properties when you
> modify it or that is incomplete from the beginning.

No, we simply add to the axiom pool. *All* I am saying is that we must
always have a lemma trail taking us back to the most fundamental axioms.

Suppose I say W = AσT⁴. Now I ask the system to prove this. At the
bottom of the lemma trail will be Clifford algebra. This relates
Bose-Einstein statistics to the spin, in this case of the photon. It is
quantum mechanics at a very fundamental level. A fermion has a
half-integer spin.

I can introduce as many axioms as I want. I can say that i = √-1. I can
call this statement an axiom, as a counterexample to your natural
numbers. In constructing Clifford algebra I make a number of statements.

This thinking in terms of axioms, I repeat, does not limit the power of
AGI. If we have a database you could almost say that a lemma trail was
in essence trivial. What it does do is invalidate the biological model.

*An absolute requirement for AGI is openness.* In other words, we must
be able to examine the arguments and their validity.

> Example (very plain just to make it clearer what I'm talking about):
>
> The natural numbers N are closed against addition. But N is not
> closed against subtraction, since n - m < 0 where m > n.
>
> You can prove the theorem that subtracting a positive number from
> another number decreases it:
>
> http://us2.metamath.org:88/mpegif/ltsubpos.html
>
> but you can still have a formal system that runs into problems.
> In the case of N it's missing closedness, i.e., undefined area.
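The closedness point can be made concrete. A minimal Python sketch (the function name and ranges are mine, not from the thread) that checks addition on N but finds subtraction undefined:

```python
def is_natural(x):
    """Membership test for N, taken here as the non-negative integers."""
    return isinstance(x, int) and x >= 0

# N is closed under addition: n + m always lands back in N.
assert all(is_natural(n + m) for n in range(20) for m in range(20))

# N is NOT closed under subtraction: n - m falls outside N whenever
# m > n, so "-" is only a partial operation on N (undefined area).
counterexamples = [(n, m) for n in range(5) for m in range(5)
                   if not is_natural(n - m)]
print(counterexamples[:3])  # [(0, 1), (0, 2), (0, 3)]
```

The point being illustrated: each individual theorem (like ltsubpos) can be proven, yet the system as a whole still has undefined regions you only see by checking the system, not the theorem.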
> Now transfer this simple example to formal systems in general.
> You have to prove every formal system as it is, not just a single
> theorem. The behavior of an AGI isn't a single theorem but a system.
>
>> The heuristics could be tested in an offline system.
>
> Exactly. But by definition heuristics are incomplete, their solution
> space is smaller than the set of all solutions. No guarantee for the
> optimal solution, just probabilities < 1, elaborated hints.
>
>>>> Unselfishness going wrong is in fact a frightening thought. It
>>>> would in AGI be a symptom of incompatible axioms.
>>>
>>> Which can happen in a complex system.
>>
>> Only if the definitions are vague.
>
> I bet against this.
>
>> Better to have a system based on "*democracy*" in some form or other.
>
> The rules you mention are goals and constraints. But they are
> heuristics you check during runtime.

That is true. Also see above. The system cannot be inscrutable.

  - Ian Parker


-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: https://www.listbox.com/member/?&
Powered by Listbox: http://www.listbox.com
