Re: [agi] Solomonoff Induction is Not Universal and Probability is not Prediction
Abram, Solomonoff Induction would produce poor predictions if it could be used to compute them. Secondly, since it cannot be computed it is useless. Third, it is not the sort of thing that is useful for AGI in the first place. You could experiment with finite possible ways to produce a string and see how useful the idea is, both as an abstraction and as an actual AGI tool. Have you tried this? An example is a word program that completes a word as you are typing.

As for Matt's complaint: I haven't yet been able to find a way that could be used to prove that Solomonoff Induction does not do what Matt claims it does, but I have yet to see an explanation of a proof that it does. When you are dealing with unverifiable pseudo-abstractions you are dealing with something that cannot be proven. All we can work on is whether or not the idea seems to make sense as an abstraction. As I said, the starting point would be to develop simpler problems and see how they behave as you build up more complicated problems. Jim

On Thu, Jul 8, 2010 at 5:15 PM, Abram Demski abramdem...@gmail.com wrote: Yes, Jim, you seem to be mixing arguments here. I cannot tell which of the following you intend: 1) Solomonoff induction is useless because it would produce very bad predictions if we could compute them. 2) Solomonoff induction is useless because we can't compute its predictions. Are you trying to reject #1 and assert #2, reject #2 and assert #1, or assert both #1 and #2? Or some third statement? --Abram

On Wed, Jul 7, 2010 at 7:09 PM, Matt Mahoney matmaho...@yahoo.com wrote: Who is talking about efficiency? An infinite sequence of uncomputable values is still just as uncomputable. I don't disagree that AIXI and Solomonoff induction are not computable. But you are also arguing that they are wrong. -- Matt Mahoney, matmaho...@yahoo.com

From: Jim Bromer jimbro...@gmail.com
To: agi agi@v2.listbox.com
Sent: Wed, July 7, 2010 6:40:52 PM
Subject: Re: [agi] Solomonoff Induction is Not Universal and Probability is not Prediction

Matt, But you are still saying that Solomonoff Induction has to be recomputed for each possible combination of bit values, aren't you? Although this doesn't matter when you are dealing with infinite computations in the first place, it does matter when you are wondering whether this has anything to do with AGI and compression efficiencies. Jim Bromer

On Wed, Jul 7, 2010 at 5:44 PM, Matt Mahoney matmaho...@yahoo.com wrote: Jim Bromer wrote: But, a more interesting question is, given that the first digits are 000, what are the chances that the next digit will be 1? Dim Induction will report .5, which of course is nonsense and a whole lot less useful than making a rough guess.

Wrong. The probability of a 1 is p(0001)/(p(0000)+p(0001)), where the probabilities are computed using Solomonoff induction. A program that outputs 0000 will be shorter in most languages than a program that outputs 0001, so 0 is the most likely next bit. More generally, probability and prediction are equivalent by the chain rule. Given any 2 strings x followed by y, the prediction p(y|x) = p(xy)/p(x). -- Matt Mahoney, matmaho...@yahoo.com

From: Jim Bromer jimbro...@gmail.com
To: agi agi@v2.listbox.com
Sent: Wed, July 7, 2010 10:10:37 AM
Subject: [agi] Solomonoff Induction is Not Universal and Probability is not Prediction

Suppose you have sets of programs that produce two strings. One set of outputs is 00 and the other is 11. Now suppose you used these sets of programs to chart the probabilities of the output of the strings.
If the two strings were each output by the same number of programs then you'd have a .5 probability that either string would be output. That's OK. But, a more interesting question is: given that the first digits are 000, what are the chances that the next digit will be 1? Dim Induction will report .5, which of course is nonsense and a whole lot less useful than making a rough guess. But, of course, Solomonoff Induction purports to be able, if it were feasible, to compute the possibilities for all possible programs. OK, but now, try thinking about this a little bit. If you have ever tried writing random program instructions, what do you usually get? Well, I'll hazard a guess (a lot better than the bogus method of confusing shallow probability with prediction in my example) and say that you will get a lot of programs that crash. Well, most of my experiments with that have ended up with programs that go into an infinite loop or which crash. Now on a universal Turing machine, the results would probably look a little different. Some programs will output nothing and go into an infinite loop. Some programs will output something and then either stop outputting anything or start outputting an infinite loop of the same substring. Other
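[Jim's suggestion above - experiment with finite sets of programs and see how the idea behaves - is easy to try at toy scale. The following is a minimal sketch, not Solomonoff induction itself: the "machine" is an invented toy (a program is a bit string that outputs its own bits repeated forever), and each program of length n gets the usual 2^-n prior weight. It computes Matt's chain-rule prediction p(1|000) = p(0001)/(p(0000)+p(0001)) and exhibits the Occam effect he describes.]

# A finite, toy-scale illustration of the chain-rule prediction
# p(next | prefix) = p(prefix+next) / p(prefix) that Matt describes.
# The "machine" here is an invented toy, not a universal Turing machine:
# a program is a bit string, and it "outputs" its own bits repeated forever.
# Each program of length n gets the prior weight 2^-n.

from itertools import product

def toy_output(program, length):
    """Output of the toy machine: the program's bits cycled out to `length`."""
    return ''.join(program[i % len(program)] for i in range(length))

def prefix_probability(prefix, max_prog_len=10):
    """Sum of 2^-|p| over all programs p whose output starts with `prefix`."""
    total = 0.0
    for n in range(1, max_prog_len + 1):
        for bits in product('01', repeat=n):
            program = ''.join(bits)
            if toy_output(program, len(prefix)) == prefix:
                total += 2.0 ** -n
    return total

p0 = prefix_probability('0000')
p1 = prefix_probability('0001')
# Chain rule: p(next=1 | 000) = p(0001) / (p(0000) + p(0001))
print('p(1|000) =', p1 / (p0 + p1))   # well below 0.5: the toy prior favors 0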
Re: [agi] Solomonoff Induction is Not Universal and Probability is not Prediction
On Fri, Jul 9, 2010 at 7:49 AM, Jim Bromer jimbro...@gmail.com wrote: Abram, Solomonoff Induction would produce poor predictions if it could be used to compute them.

Solomonoff induction is a mathematical, not verbal, construct. Based on the most obvious mapping from the verbal terms you've used above into the mathematical definitions in terms of which Solomonoff induction is constructed, the above statement of yours is FALSE. If you're going to argue against a mathematical theorem, your argument must be mathematical, not verbal. Please explain one of: 1) which step in the proof of Solomonoff induction's effectiveness you believe is in error; 2) which of the assumptions of this proof you think is inapplicable to real intelligence [apart from the assumption of infinite or massive compute resources]. Otherwise, your statement is in the same category as the statement by the protagonist of Dostoevsky's Notes from the Underground -- I admit that two times two makes four is an excellent thing, but if we are to give everything its due, two times two makes five is sometimes a very charming thing too. ;-)

Secondly, since it cannot be computed it is useless. Third, it is not the sort of thing that is useful for AGI in the first place.

I agree with these two statements -- ben G
Re: [agi] Solomonoff Induction is Not Universal and Probability is not Prediction
Ben Goertzel wrote: Secondly, since it cannot be computed it is useless. Third, it is not the sort of thing that is useful for AGI in the first place. I agree with these two statements

The principle of Solomonoff induction can be applied to computable subsets of the (infinite) hypothesis space. For example, if you are using a neural network to make predictions, the principle says to use the smallest network that computes the past training data. -- Matt Mahoney, matmaho...@yahoo.com
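[A minimal sketch of the "smallest hypothesis that fits the data" principle Matt describes, with polynomial degree standing in for network size. The two-part MDL-style score and all constants here are illustrative assumptions, not a standard recipe.]

# Pick the smallest model that accounts for the training data.
# Score = description length of the model + description length of the errors
# (a crude two-part MDL code); the constants are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 40)
y = 2 * x + 0.5 + rng.normal(scale=0.05, size=x.size)  # truly linear data

def mdl_score(degree, bits_per_param=32):
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    model_bits = bits_per_param * (degree + 1)
    # Crude error code length: Gaussian log-loss of the residuals, in bits.
    error_bits = 0.5 * x.size * np.log2(2 * np.pi * np.e * residuals.var() + 1e-12)
    return model_bits + error_bits

best = min(range(8), key=mdl_score)
print('selected degree:', best)  # tends to pick 1, the smallest adequate model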
Re: [agi] Solomonoff Induction is Not Universal and Probability is not Prediction
On Fri, Jul 9, 2010 at 8:38 AM, Matt Mahoney matmaho...@yahoo.com wrote: Ben Goertzel wrote: Secondly, since it cannot be computed it is useless. Third, it is not the sort of thing that is useful for AGI in the first place. I agree with these two statements The principle of Solomonoff induction can be applied to computable subsets of the (infinite) hypothesis space. For example, if you are using a neural network to make predictions, the principle says to use the smallest network that computes the past training data.

Yes, of course various versions of Occam's Razor are useful in practice, and we use an Occam bias in MOSES inside OpenCog, for example. But as you know, these are not exactly the same as Solomonoff induction, though they're based on the same idea... -- Ben
Re: [agi] Re: Huge Progress on the Core of AGI
Mike,

On Thu, Jul 8, 2010 at 6:52 PM, Mike Tintner tint...@blueyonder.co.uk wrote: Isn't the first problem simply to differentiate the objects in a scene?

Well, that is part of the movement problem. If you say something moved, you are also saying that the objects in the two or more video frames are the same instance.

(Maybe the most important movement to begin with is not the movement of the object, but of the viewer changing their POV if only slightly - wh. won't be a factor if you're looking at a screen)

Maybe, but this problem becomes kind of trivial in a 2D environment, assuming you don't allow rotation of the POV. Moving the POV would simply translate all the objects linearly. If you make it a 3D environment, it becomes significantly more complicated. I could work on 3D, which I will, but I'm not sure I should start there. I probably should consider it, though, and see what complications it adds to the problem and how they might be solved.

And that I presume comes down to being able to put a crude, highly tentative, and fluid outline round them (something that won't be neces. if you're dealing with squares?). Knowing v. little if anything about what kind of objects they are. As an infant most likely does. (See infants' drawings and how they evolve v. gradually from a v. crude outline blob that at first can represent anything - that I'm suggesting is a replay of how visual perception developed.) The fluid outline or image schema is arguably the basis of all intelligence - just about everything AGI is based on it. You need an outline for instance not just of objects, but of where you're going, and what you're going to try and do - if you want to survive in the real world. Schemas connect everything AGI. And it's not a matter of choice - first you have to have an outline/sense of the whole - whatever it is - before you can start filling in the parts.

Well, this is the question. The solution is underdetermined, which means that a right solution is not possible to know with complete certainty. So, you may take the approach of using contours to match objects, but that is certainly not the only way to approach the problem. Yes, you have to use local features in the image to group pixels together in some way. I agree with you there. Is using contours the right way? Maybe, but not by itself. You have to define the problem a little better than just saying that we need to construct an outline. The real problem/question is this: How do you determine the uncertainty of a hypothesis, lower it, and also determine how good a hypothesis is, especially in comparison to other hypotheses? So, in this case, we are trying to use an outline comparison to determine the best-match hypotheses between objects. But that doesn't define how you score alternative hypotheses. It also is certainly not the only way to do it. You could use the details within the outline too. In fact, in some situations, this would be required to disambiguate between the possible hypotheses.

P.S. It would be mindblowingly foolish BTW to think you can do better than the way an infant learns to see - that's an awfully big visual section of the brain there, and it works.

I'm not trying to do better than the human brain. I am trying to solve the same problems that the brain solves, in a different way - sometimes better than the brain, sometimes worse, sometimes equivalently. What would be foolish is to assume the only way to duplicate general intelligence is to copy the human brain. By taking this approach, you are forced to reverse-engineer and understand something that is extremely difficult to reverse-engineer. In addition, a solution using the brain's design may not be economically feasible. So, approaching the problem by copying the human brain has additional risks. You may end up figuring out how the brain works and not be able to use it. In addition, you might not end up with a good understanding of what other solutions might be possible. Dave
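[A small sketch of David's point that, in a 2D world without rotation, moving the viewer shifts every object by one common offset, so the camera motion can be estimated from tracked points and subtracted out. The scene data is invented for illustration.]

import numpy as np

objects_frame1 = np.array([[10.0, 20.0], [40.0, 55.0], [70.0, 15.0]])
camera_shift = np.array([3.0, -2.0])
# With a pure POV translation, every object moves by -camera_shift on screen.
objects_frame2 = objects_frame1 - camera_shift

# Estimate the shift as the mean per-object displacement (least squares).
estimated_shift = -(objects_frame2 - objects_frame1).mean(axis=0)
print('estimated camera shift:', estimated_shift)       # [ 3. -2.]

# Undo it: residual motion is what the objects themselves did (none here).
residual = objects_frame2 + estimated_shift - objects_frame1
print('residual object motion:', residual)              # all zeros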
Re: [agi] Solomonoff Induction is Not Universal and Probability is not Prediction
On Fri, Jul 9, 2010 at 7:56 AM, Ben Goertzel b...@goertzel.org wrote: If you're going to argue against a mathematical theorem, your argument must be mathematical not verbal. Please explain one of 1) which step in the proof about Solomonoff induction's effectiveness you believe is in error 2) which of the assumptions of this proof you think is inapplicable to real intelligence [apart from the assumption of infinite or massive compute resources]

Solomonoff Induction is not a provable theorem; it is therefore a conjecture. It cannot be computed, and it cannot be verified. There are many mathematical theorems that require the use of limits to prove them, for example, and I accept those proofs. (Some people might not.) But there is no evidence that Solomonoff Induction would tend toward some limit. Now maybe the conjectured abstraction can be verified through some other means, but I have yet to see an adequate explanation of that in any terms. The idea that I have to answer your challenges using only the terms you specify is noise. Look at 2. What does that say about your theorem? I am working on 1, but I just said: I haven't yet been able to find a way that could be used to prove that Solomonoff Induction does not do what Matt claims it does.

What is notable is that no one has objected to my characterization of the conjecture as I have been able to work it out for myself: it requires an infinite set of infinitely computed probabilities for each infinite string. If this characterization is correct, then Matt has been using the term string ambiguously. As a primary sample space: a particular string. And as a compound sample space: all the possible individual cases of the substring compounded into one. No one has yet told of his mathematical experiments of using a Turing simulator to see what a finite iteration of all possible programs of a given length would actually look like. I will finish this later.
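[The finite experiment Jim asks about at the end - enumerate every program of a given length and see what the outputs look like - can at least be run at toy scale. A sketch, with the caveat that the 4-instruction machine below is an invented toy chosen for simplicity, not a universal Turing machine.]

from itertools import product
from collections import Counter

# Instructions: 0 = emit '0', 1 = emit '1', 2 = jump to start, 3 = halt.
def run(program, step_cap=50):
    out, pc, steps = [], 0, 0
    while steps < step_cap:
        if pc >= len(program):
            return 'halted', ''.join(out)
        op = program[pc]
        if op == 3:
            return 'halted', ''.join(out)
        if op == 2:
            pc = 0
        else:
            out.append(str(op))
            pc += 1
        steps += 1
    return 'still running at cap (looping?)', ''.join(out)

tally = Counter()
for program in product(range(4), repeat=4):   # all 256 programs of length 4
    status, output = run(program)
    tally[status] += 1
print(tally)
# Typical result: a mix of halting programs (many with duplicate outputs)
# and non-halting loopers -- the behavior classes Jim describes.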
Re: [agi] Re: Huge Progress on the Core of AGI
Couple of quick comments (I'm still thinking about all this - but I'm confident everything AGI links up here). A fluid schema is arguably by its v. nature a method - a trial-and-error, arguably universal method. It links vision to the hand or any effector. Handling objects also is based on fluid schemas - you put out a fluid, adjustably-shaped hand to grasp things. And even if you don't have hands, like a worm, and must grasp things with your body, and must grasp the ground under which you move, then too you must use fluid body schemas/maps. All concepts - the basis of language and, before language, all intelligence - are also almost certainly fluid schemas (and not, as you suggested, patterns). All creative problem-solving begins from concepts of what you want to do (and not formulae or algorithms as in rational problem-solving). Any suggestion to the contrary will not, I suggest, bear the slightest serious examination. **Fluid schemas/concepts/fluid outlines are attempts-to-grasp-things - gropings.**

Point 2: I'd relook at your assumptions in all your musings - my impression is they all assume, unwittingly, an *adult* POV - the view of s.o. who already knows how to see - as distinct from an infant who is just learning to see and get to grips with an extremely blurred world (even more blurred and confusing, I wouldn't be surprised, than that Prakash video). You're unwittingly employing top-down, fully-formed-intelligence assumptions even while overtly trying to produce a learning system - you're looking for what an adult wants to know, rather than what an infant starting-from-almost-no-knowledge-of-the-world wants to know. If you accept the point in any way, major philosophical rethinking is required.
Re: [agi] Re: Huge Progress on the Core of AGI
On Fri, Jul 9, 2010 at 10:04 AM, Mike Tintner tint...@blueyonder.co.uk wrote: Couple of quick comments (I'm still thinking about all this - but I'm confident everything AGI links up here). A fluid schema is arguably by its v. nature a method - a trial-and-error, arguably universal method. It links vision to the hand or any effector. Handling objects also is based on fluid schemas - you put out a fluid, adjustably-shaped hand to grasp things. And even if you don't have hands, like a worm, and must grasp things with your body, and must grasp the ground under which you move, then too you must use fluid body schemas/maps. All concepts - the basis of language and, before language, all intelligence - are also almost certainly fluid schemas (and not, as you suggested, patterns).

Fluid schemas is not an actual algorithm. It is not clear how to go about implementing such a design. Even so, when you get into the details of actually implementing it, you will find yourself faced with the exact same problems I'm trying to solve. So, let's say you take the first frame and generate an initial fluid schema. What if an object disappears? What if the object changes? What if the object moves a little or a lot? What if a large number of changes occur at once, like one new thing suddenly blocking a bunch of similar stuff that is behind it? How far does your fluid schema have to be distorted before the algorithm realizes that it needs a new schema and can't use the same old one? You can't just say that all objects are always present and just distort the schema. What if two similar objects appear, or both move and one disappears? How does your schema handle this? Regardless of whether you talk about hypotheses or schemas, it is the SAME problem. You can't avoid the fact that the whole thing is underdetermined and you need a way to score and compare hypotheses. If you disagree, please define your schema algorithm a bit more specifically. Then we would be able to analyze its pros and cons better.

All creative problem-solving begins from concepts of what you want to do (and not formulae or algorithms as in rational problem-solving). Any suggestion to the contrary will not, I suggest, bear the slightest serious examination.

Sure. I would point out, though, that children do stuff just to learn in the beginning. A good example is our desire to play. Playing is a strategy by which children learn new things even though they don't have a need for those things yet. It motivates us to learn for the future and not for any pressing present needs. No matter how you look at it, you will need algorithms for general intelligence. To say otherwise makes zero sense. No algorithms, no design. No matter what design you come up with, I call that an algorithm. Algorithms don't have to be formulaic or narrow. Keep an open mind about the word algorithm, unless you can suggest a better term to describe general AI algorithms.

**Fluid schemas/concepts/fluid outlines are attempts-to-grasp-things - gropings.** Point 2: I'd relook at your assumptions in all your musings - my impression is they all assume, unwittingly, an *adult* POV - the view of s.o. who already knows how to see - as distinct from an infant who is just learning to see and get to grips with an extremely blurred world (even more blurred and confusing, I wouldn't be surprised, than that Prakash video). You're unwittingly employing top-down, fully-formed-intelligence assumptions even while overtly trying to produce a learning system - you're looking for what an adult wants to know, rather than what an infant starting-from-almost-no-knowledge-of-the-world wants to know. If you accept the point in any way, major philosophical rethinking is required.

This point doesn't really define at all how the approach should be changed or what approach to take. So, it doesn't change the way I approach the problem. You would really have to be more specific. For example, you could say that the infant doesn't even know how to group pixels, so it has to learn that automatically. I would have to disagree with this approach because I can't think of any reasonable algorithms that could reasonably explore the possibilities. It doesn't seem better to me to describe the problem even more generally, to the point where you are learning how to learn. This is what Abram was suggesting. But, as I said to him, you need a way to suggest and search for possible learning methods and then compare them. There doesn't seem to be a way to do this effectively. And so, you shouldn't over-generalize in this way. As I said in the initial email (this week), there is no such thing as perfectly general, and no silver bullet for solving any problem. So, I believe that even infants are born expecting what the world will be like. They aren't able to learn about just any world. They are optimized to configure their brains for this world.
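[For what it's worth, a minimal sketch of the hypothesis scoring David keeps asking for: each way of explaining the second frame from the first (an object moved, changed, or disappeared) gets an explicit cost, and the cheapest explanation wins. The feature model and penalty constants are invented for illustration, and each old object is scored greedily on its own.]

import math

frame1 = {'A': {'pos': (10, 10), 'color': 'red'},
          'B': {'pos': (50, 40), 'color': 'blue'}}
frame2 = {'X': {'pos': (12, 11), 'color': 'red'},
          'Y': {'pos': (80, 80), 'color': 'green'}}

DISAPPEAR_COST = 30.0   # how reluctant we are to say an object vanished
CHANGE_COST = 20.0      # penalty for claiming an appearance change

def match_cost(old, new):
    dist = math.dist(old['pos'], new['pos'])
    change = 0.0 if old['color'] == new['color'] else CHANGE_COST
    return dist + change

for name, obj in frame1.items():
    # Candidate hypotheses: matched to each new object, or disappeared.
    candidates = [(match_cost(obj, new), f'{name} -> {new_name}')
                  for new_name, new in frame2.items()]
    candidates.append((DISAPPEAR_COST, f'{name} disappeared'))
    cost, best = min(candidates)
    print(f'{best} (cost {cost:.1f})')
# A moved slightly, so it matches X cheaply; B's best match is so expensive
# that the "disappeared" hypothesis wins instead.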
Re: [agi] Solomonoff Induction is Not Universal and Probability is not Prediction
To make this discussion more concrete, please look at http://www.vetta.org/documents/disSol.pdf Section 2.5 gives a simple version of the proof that Solomonoff induction is a powerful learning algorithm in principle, and Section 2.6 explains why it is not practically useful. What part of that paper do you think is wrong? thx ben

-- Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
CTO, Genescient Corp
Vice Chairman, Humanity+
Advisor, Singularity University and Singularity Institute
External Research Professor, Xiamen University, China
b...@goertzel.org

“When nothing seems to help, I go look at a stonecutter hammering away at his rock, perhaps a hundred times without as much as a crack showing in it. Yet at the hundred and first blow it will split in two, and I know it was not that blow that did it, but all that had gone before.”
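[For readers without the paper at hand, the result Ben is pointing to has the following standard form (a paraphrase in the usual Solomonoff/Hutter notation, not a quotation from Section 2.5):]

% Prior probability of a binary string x under a universal prefix machine U
% (the sum runs over programs p whose output begins with x):
\[
  M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}
\]
% Convergence: if the environment is any computable measure \mu, the total
% expected squared prediction error of M is finite, bounded via the
% Kolmogorov complexity K(\mu):
\[
  \sum_{t=1}^{\infty} \mathbf{E}_{\mu}\!\left[
    \big( M(x_t = 1 \mid x_{<t}) - \mu(x_t = 1 \mid x_{<t}) \big)^2
  \right] \;\le\; \frac{\ln 2}{2}\, K(\mu)
\]
% So M's predictions converge to the true probabilities -- the "powerful in
% principle" half -- while K(\mu) and M itself are uncomputable, which is
% the Section 2.6 half of the story.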
Re: [agi] Re: Huge Progress on the Core of AGI
If fluid schemas - speaking broadly - are what is needed (and I'm pretty sure they are), it's no good trying for something else. You can't substitute a square approach for a fluid amoeba-outline approach. (And you will certainly need exactly such an approach to recognize amoebas.) If it requires a new kind of machine, or a radically new kind of instruction set for computers, then that's what it requires - Stan Franklin, BTW, is one person who does recognize, and is trying to deal with, this problem - might be worth checking up on him. This is partly BTW why my instinct is that it may be better to start with tasks for robot hands*, because it should be possible to get them to apply a relatively flexible and fluid grip/handshape and grope for and experiment with differently shaped objects. And if you accept the broad philosophy I've been outlining, then it does make sense that evolution should have started with touch as a more primary sense, well before it got to vision. *Or perhaps it may prove better to start with robot snakes/bodies or somesuch.
Re: [agi] Re: Huge Progress on the Core of AGI
Mike, Please outline your algorithm for fluid schemas, though. It will be clear when you do that you are faced with the exact same uncertainty problems I am dealing with and trying to solve. The problems are completely equivalent. Yours is just a specific approach that is not sufficiently defined. You have to define how you deal with uncertainty when using fluid schemas, or even how to approach the task of figuring it out. Until then, it's not a solution to anything. Dave
Re: [agi] Solomonoff Induction is Not Universal and Probability is not Prediction
Although I haven't studied Solomonoff induction yet (I plan to read up on it), I've realized that people seem to be making the same mistake I was. People are trying to find one silver-bullet method of induction or learning that works for everything. I've begun to realize that it's OK if something doesn't work for everything, as long as it works on a large enough subset of problems to be useful. If you can figure out how to construct justifiable methods of induction for enough of the problems that you need to solve, then that is sufficient for AGI. This is the same mistake I made, and it was the point I was trying to make in the recent email I sent. I kept trying to come up with algorithms for doing things, and I could always find a test case to break them. So, now I've begun to realize that it's OK if it breaks sometimes! The question is: can you define an algorithm that breaks gracefully and which can figure out what problems it can be applied to and what problems it should not be applied to? If you can do that, then you can solve the problems where it is applicable, and avoid the problems where it is not. This is perfectly OK! You don't have to find a silver-bullet method of induction or inference that works for everything! Dave
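[One concrete - and admittedly speculative - reading of David's "breaks gracefully" idea; a sketch, not his design: a predictor that monitors its own recent accuracy and abstains when it is outside its competence. The class name, window size, and 0.6 threshold are all invented.]

from collections import deque

class GracefulPredictor:
    def __init__(self, predict_fn, window=20, min_accuracy=0.6):
        self.predict_fn = predict_fn
        self.recent = deque(maxlen=window)   # 1 = recent hit, 0 = miss
        self.min_accuracy = min_accuracy

    def predict(self, x):
        # Abstain (return None) once the track record is bad enough.
        if len(self.recent) == self.recent.maxlen:
            accuracy = sum(self.recent) / len(self.recent)
            if accuracy < self.min_accuracy:
                return None
        return self.predict_fn(x)

    def feedback(self, prediction, truth):
        if prediction is not None:
            self.recent.append(1 if prediction == truth else 0)

# Usage: a rule that is right in one regime and wrong in another.
p = GracefulPredictor(lambda x: x % 2)      # "parity" rule
for x, truth in [(i, i % 2) for i in range(30)] + [(i, 0) for i in range(30)]:
    guess = p.predict(x)
    p.feedback(guess, truth)
# In the first regime it answers; in the second its accuracy collapses
# and it starts returning None instead of confidently wrong answers.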
Re: [agi] Solomonoff Induction is Not Universal and Probability is not Prediction
The same goes for inference. There is no silver-bullet method that is completely general and can infer anything. There is no general inference method. Sometimes it works, sometimes it doesn't. That is the nature of the complex world we live in. My current theory is that the more we try to find a single silver bullet, the more we will just break against the fact that none exists.
Re: [agi] Solomonoff Induction is Not Universal and Probability is not Prediction
I don't think Solomonoff induction is a particularly useful direction for AI; I was just taking issue with the statement made that it is not capable of correct prediction given adequate resources...
Re: [agi] Re: Huge Progress on the Core of AGI
There isn't an algorithm. It's basically a matter of overlaying shapes to see if they fit - much as you put one hand against another to see if they fit - much as you can overlay a hand to see if it fits and is capable of grasping an object - except considerably more fluid/rougher. There has to be some instruction generating the process, but it's not an algorithm. How can you have an algorithm for recognizing amoebas - or rocks or a drop of water? They are not patterned entities - or, by extension, reducible to algorithms. You don't need to think too much about internal visual processes - you can just look at the external objects-to-be-classified, the objects that make up this world, and see this. Just as you can look at a set of diverse patterns and see that they too are not reducible to any single formula/pattern/algorithm. We're talking about the fundamental structure of the universe and its contents. If this is right and God is an artist before he is a mathematician, then it won't do any good screaming about it; you're going to have to invent a way to do art, so to speak, on computers. Or you can pretend that dealing with mathematical squares will somehow help here - but it hasn't and won't. Do you think that a creative process like creating http://www.apocalyptic-theories.com/gallery/lastjudge/bosch.jpg started with an algorithm? There are other ways of solving problems than algorithms - the person who created each algorithm in the first place certainly didn't have one.

From: David Jones Sent: Friday, July 09, 2010 4:20 PM To: agi Subject: Re: [agi] Re: Huge Progress on the Core of AGI Mike, Please outline your algorithm for fluid schemas though. It will be clear when you do that you are faced with the exact same uncertainty problems I am dealing with and trying to solve. The problems are completely equivalent. Yours is just a specific approach that is not sufficiently defined. You have to define how you deal with uncertainty when using fluid schemas, or even how to approach the task of figuring it out. Until then, it's not a solution to anything. Dave

On Fri, Jul 9, 2010 at 10:59 AM, Mike Tintner tint...@blueyonder.co.uk wrote: If fluid schemas - speaking broadly - are what is needed (and I'm pretty sure they are), it's n.g. trying for something else. You can't substitute a square approach for a fluid amoeba-outline approach. (And you will certainly need exactly such an approach to recognize amoebas.) If it requires a new kind of machine, or a radically new kind of instruction set for computers, then that's what it requires - Stan Franklin, BTW, is one person who does recognize this and is trying to deal with this problem - might be worth checking up on him. This is partly, BTW, why my instinct is that it may be better to start with tasks for robot hands*, because it should be possible to get them to apply a relatively flexible and fluid grip/handshape and grope for and experiment with differently shaped objects. And if you accept the broad philosophy I've been outlining, then it does make sense that evolution should have started with touch as a more primary sense, well before it got to vision. *Or perhaps it may prove better to start with robot snakes/bodies or somesuch.

From: David Jones Sent: Friday, July 09, 2010 3:22 PM To: agi Subject: Re: [agi] Re: Huge Progress on the Core of AGI On Fri, Jul 9, 2010 at 10:04 AM, Mike Tintner tint...@blueyonder.co.uk wrote: Couple of quick comments (I'm still thinking about all this - but I'm confident everything AGI links up here).
A fluid schema is arguably by its v. nature a method - a trial-and-error, arguably universal method. It links vision to the hand or any effector. Handling objects also is based on fluid schemas - you put out a fluid, adjustably-shaped hand to grasp things. And even if you don't have hands, like a worm, and must grasp things with your body, and must grasp the ground over which you move, then too you must use fluid body schemas/maps. All concepts - the basis of language and, before language, all intelligence - are also almost certainly fluid schemas (and not, as you suggested, patterns).

Fluid schemas are not an actual algorithm, and it is not clear how to go about implementing such a design. Even so, when you get into the details of actually implementing it, you will find yourself faced with the exact same problems I'm trying to solve. So, let's say you take the first frame and generate an initial fluid schema. What if an object disappears? What if the object changes? What if the object moves a little, or a lot? What if a large number of changes occur at once, like one new thing suddenly blocking a bunch of similar stuff behind it? How far does your fluid schema have to be distorted before the algorithm realizes that it needs a new schema and can't use the same old one? You can't just say that all objects are always
Re: [agi] Re: Huge Progress on the Core of AGI
The way I define algorithms encompasses just about any intelligently designed system. So, call it what you want. I really wish you would stop avoiding the word. But fine, I'll play your word game... Define your system, please, and justify why or how it handles uncertainty. You said to "overlay a hand to see if it fits." How do you define "fits"? The truth is that it will never fit perfectly, so how do you define a good fit and a bad one? You will find that you end up with the exact same problems I am working on. You keep avoiding the need to define the system of fluid schemas. You're avoiding it because it's not a solution to anything, and you can't define it without realizing that your idea doesn't pan out. So, I dare you: define your fluid schemas without revealing the fatal flaw in your reasoning. Dave
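[Editorial note: David's "how do you define 'fits'?" question has standard concrete answers, one of which is sketched below purely for illustration — nothing here is Mike's actual proposal. The binary-mask representation, the function names, and the 0.7 threshold are all assumptions of the sketch.]

# One minimal way to quantify "fit" between two overlaid shapes:
# intersection-over-union (IoU) on binary masks.

def iou(mask_a, mask_b):
    """mask_a, mask_b: sets of (x, y) grid cells covered by each shape."""
    inter = len(mask_a & mask_b)
    union = len(mask_a | mask_b)
    return inter / union if union else 0.0

def fits(mask_a, mask_b, threshold=0.7):
    """A 'good fit' is just an IoU above a chosen threshold -- which is
    exactly where the uncertainty questions re-enter: how low may the
    score drop before the schema no longer applies?"""
    return iou(mask_a, mask_b) >= threshold

# Example: a 3x3 square versus the same square shifted one cell right.
square = {(x, y) for x in range(3) for y in range(3)}
shifted = {(x + 1, y) for x in range(3) for y in range(3)}
print(iou(square, shifted))   # 6/12 = 0.5
print(fits(square, shifted))  # False at threshold 0.7

The point of the sketch is that any overlay scheme forces the same design decisions David lists: a score, a threshold, and a policy for what to do when the score degrades.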
Re: [agi] Solomonoff Induction is Not Universal and Probability is not Prediction
On Fri, Jul 9, 2010 at 11:37 AM, Ben Goertzel b...@goertzel.org wrote: I don't think Solomonoff induction is a particularly useful direction for AI, I was just taking issue with the statement made that it is not capable of correct prediction given adequate resources...

Pi cannot be computed exactly; it would take infinite resources to write out all of its digits. However, because the standard series for Pi converge to a limit, the theory of limits can be used to show that Pi can be refined to any desired precision, and since it consistently approaches a limit it can be used in general theorems that can be proven through induction. You can use *computed values* of Pi in a general theorem as long as you can show that the usage is valid by using the theory of limits.

I think I figured out a way, given infinite resources, to write a program that could compute Solomonoff Induction. However, since it cannot be shown (or at least I don't know of anyone who has ever shown) that the probabilities approach some value (or values) as a limit (or limits), this program (or a variation on this kind of program) could not be used to show that Solomonoff Induction can be: 1. computed to any specified degree of precision within some finite number of steps; 2. proven through the use of mathematical induction.

The proof is based on the diagonal argument of Cantor, or might be considered a variation of Cantor's diagonal argument. There can be no one-to-one *mapping of the computation to a usage* as the computation approaches infinity that makes the values approach some limit of precision. For any computed values there is always a *possibility* (this is different from Cantor) that there are an infinite number of more precise values (of the probability of a string, whether a primary sample space or a compound sample space) within any two iterations of the computational program (formula).

So even though I cannot disprove what Solomonoff Induction might be given infinite resources, if this superficial analysis is right, then without a way to compute the values so that they tend toward a limit for each of the probabilities needed, it is not a usable mathematical theorem. What uncomputable means is that any statement (or most statements) drawn from it is a matter of mathematical conjecture or opinion. It's like opining that the Godel sentence, given infinite resources, is decidable. I don't think the question of whether it is valid for infinite resources can be answered mathematically for the time being. And conclusions drawn from uncomputable results have to be considered dubious.

However, it certainly leads to other questions which I think are more interesting and more useful. What is needed to promote greater insight into the problem of conditional probabilities in complicated situations, where the probability emitters and the elementary sample space may be obscured by complicated interactions and a preliminary focus on compound sample spaces? Are there theories which, like asking questions about the givens in a problem, could lead toward better detection of the relation between the givens and the primary probability emitters and the primary sample space? Can a mathematical theory be based solely on abstract principles even though it is impossible to evaluate the use of those abstractions with examples from the particulars (of the abstractions)? How could those abstract principles be reliably defined so that they aren't too simplistic?
Jim Bromer
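[Editorial note: the contrast Jim draws with Pi can be made concrete in a few lines. The Leibniz series for Pi is alternating with decreasing terms, so its truncation error is bounded by the first omitted term; that computable error bound is what lets a finite computation certify any requested precision — the property he argues Solomonoff induction lacks. A minimal sketch; the function name and the choice of series are mine.]

# Computing pi to a requested precision in a certified, finite number of
# steps, using the Leibniz series pi/4 = 1 - 1/3 + 1/5 - ...
# For an alternating series with decreasing terms, the truncation error is
# at most the first omitted term, so we can stop as soon as that bound
# (times 4) drops below eps and *know* we are within eps of the limit.

def pi_to_precision(eps):
    """Return an approximation of pi guaranteed to be within eps of pi."""
    total = 0.0
    k = 0
    while 4.0 / (2 * k + 1) >= eps:
        total += (-1) ** k / (2 * k + 1)
        k += 1
    return 4.0 * total

print(pi_to_precision(1e-5))  # agrees with pi to within 0.00001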
Re: [agi] Solomonoff Induction is Not Universal and Probability is not Prediction
On Fri, Jul 9, 2010 at 1:12 PM, Jim Bromer jimbro...@gmail.com wrote: The proof is based on the diagonal argument of Cantor, or might be considered a variation of Cantor's diagonal argument. There can be no one-to-one *mapping of the computation to a usage* as the computation approaches infinity that makes the values approach some limit of precision. For any computed values there is always a *possibility* (this is different from Cantor) that there are an infinite number of more precise values (of the probability of a string, whether a primary sample space or a compound sample space) within any two iterations of the computational program (formula).

Ok, I didn't get that right, but there is enough there to get the idea. For any computed values there is always a *possibility* (I think this is different from Cantor) that there are an infinite number of more precise values (of the probability of a string, whether a primary sample space or a compound sample space) that may fall outside the limits that could be derived from any finite sequence of iterations of the computational program (formula).
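[Editorial note: for reference, the standard formal statement of the difficulty under discussion can be written down in a few lines of notation; this is textbook material (e.g., Li and Vitanyi's treatment), offered as context rather than as a verdict on the argument above. Writing M for the universal prior over a universal prefix machine U:

    M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|}

Running every program of length at most t for t steps yields approximants

    M_t(x) \le M_{t+1}(x) \le M(x), \qquad \lim_{t \to \infty} M_t(x) = M(x),

so M is approximable from below (lower semicomputable). But M is not computable: there is no computable function t(x, \epsilon) guaranteeing

    M(x) - M_{t(x,\epsilon)}(x) < \epsilon .

In other words, the approximation does converge to a limit, but with no certified rate of convergence — unlike the series for Pi above.]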
Re: [agi] Solomonoff Induction is Not Universal and Probability is not Prediction
Solomonoff Induction is not a mathematical conjecture. We can talk about a function that is based on all mathematical functions, but since we cannot define that as a mathematical function, it is not a realizable function.
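[Editorial note: as a point of comparison for the "not a realizable function" claim — again textbook material rather than anyone's position in this thread — the object in question is usually formalized not as a probability measure but as a semimeasure, because probability mass leaks to programs that never halt or never extend their output:

    M(x0) + M(x1) \le M(x) \quad \text{(strict inequality in general)}, \qquad \sum_{x \in \{0,1\}^n} M(x) \le 1 .

Whether a semimeasure that can only be approximated from below counts as a "realizable function" is, arguably, exactly the question under dispute.]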
[agi] My Sing. U lecture on AGI blogged at Wired UK:
http://www.wired.co.uk/news/archive/2010-07/9/singularity-university-robotics-ai
Re: [agi] My Sing. U lecture on AGI blogged at Wired UK:
How was your overall experience there? Anything you learned that is worth mentioning?

On Fri, Jul 9, 2010 at 2:46 PM, Ben Goertzel b...@goertzel.org wrote: http://www.wired.co.uk/news/archive/2010-07/9/singularity-university-robotics-ai

-- Carlos A Mejia Taking life one singularity at a time. www.Transalchemy.com
Re: [agi] My Sing. U lecture on AGI blogged at Wired UK:
I gave the lecture via Skype from my house in Maryland... I learned that NASA has a crap Internet connection 8-D

On Fri, Jul 9, 2010 at 2:50 PM, The Wizard key.unive...@gmail.com wrote: How was your overall experience there? Anything you learned that is worth mentioning?

-- Ben Goertzel, PhD CEO, Novamente LLC and Biomind LLC CTO, Genescient Corp Vice Chairman, Humanity+ Advisor, Singularity University and Singularity Institute External Research Professor, Xiamen University, China b...@goertzel.org "I admit that two times two makes four is an excellent thing, but if we are to give everything its due, two times two makes five is sometimes a very charming thing too." -- Fyodor Dostoevsky
Re: [agi] My Sing. U lecture on AGI blogged at Wired UK:
Their earthbound internet has probably been downgraded to allow more bandwidth for the interplanetary internet ;-)

On Fri, Jul 9, 2010 at 2:54 PM, Ben Goertzel b...@goertzel.org wrote: I gave the lecture via Skype from my house in Maryland... I learned that NASA has a crap Internet connection 8-D

-- Carlos A Mejia Taking life one singularity at a time. www.Transalchemy.com
Re: [agi] Solomonoff Induction is Not Universal and Probability is not Prediction
I guess the Godel Theorem is called a theorem, so Solomonoff Induction would be called a theorem. I believe that Solomonoff Induction is computable (given infinite resources), but the claims that are made for it are not provable, because there is no way to prove that it approaches a stable limit (or stable limits). You can't prove that it does, because the sense of all possible programs is so ill-defined that there is not enough to go on. Whether my outline of a disproof could actually be used to find an adequate disproof, I don't know. My attempt to disprove it may itself be unprovable (or even wrong). Jim Bromer