RE: [agi] Re: Compressed Cross-Indexed Concepts
Mike,

To put it into your own words, mathematics is a delineation out of the infinitely diversifiable, the same zone that design comes from. And design needs a medium; the medium can be the symbolic expressions and language of mathematics. Conveniently, that mathematics is expressible in a software language, computer system, and database. Don't forget, the designer in all of us needs a medium to express and communicate; without one it remains in a void. A designer emits design, and in this case, AGI, the design is the/a designer. Sounds kind of hokey but true. There are other narrow cases where this is true, but not in the grand way it is for AGI. IOW, in a way, AGI will design itself: it comes out of the infinitely diversifiable and maintains communication with it as a delineation within itself. It is self-organizingly injecting itself into this chaotic world via our intended or unintended manifestations.

John

From: Mike Tintner [mailto:tint...@blueyonder.co.uk]

JAR: Define infinitely diversifiable.

I just did, more or less. A form/shape can be said to be delineated (although I'm open to alternative terms, because delineation needn't consist of using lines as such - as in my examples, it could involve using amorphous masses, or pseudo-lines). Diversification - in this case creating new kinds of font - therefore involves using 1) new principles of delineation: the kinds of lines/visual elements used are radically changed, and 2) new principles of *arrangement* of the visual elements: for example, various fonts there can be said to conform to an A arrangement, but one or more shifted that to a new triangle arrangement without any cross-bar in the middle; using double/triple lines could be classified as either 1) or 2), I guess. An innovative (although possibly PITA) arrangement would be to have elements that move/are mobile.
And delineation involves 3) introducing new kinds of elements *in addition* to those already there, or deleting existing kinds of elements. Diversifiable is merely recognizing the realities of the fields of art and design, which is that they will - and a creative algorithm therefore would have to be able to - infinitely/endlessly transform the constitution and principles of delineation and depiction of any and all forms.

I think part of the problem here is that you guys think like mathematicians and not designers - you see the world in terms of more or less rigidly structured abstract forms (which allows for all geometric morphisms) - but a designer has to think, consciously or unconsciously, much more fluidly in terms of kaleidomorphic, freely structured and fluidly morphable abstract forms. He sees abstract forms as infinitely diversifiable. You don't. To do AGI, I'm suggesting - in fact, I'm absolutely sure - you will have to start thinking in addition like designers. If you have contempt for design, as most people here seem to do, it is actually you who deserve contempt. God was a designer long before He took up maths.

From: J. Andrew Rogers [mailto:jar.mail...@gmail.com]
Sent: Wednesday, August 25, 2010 5:23 PM
To: AGI [mailto:a...@listbox.com]
Subject: Re: [agi] Re: Compressed Cross-Indexed Concepts

On Wed, Aug 25, 2010 at 9:09 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

You do understand BTW that your creative algorithm must be able to produce not just a limited collection of shapes [either squares or A's] but an infinitely diversifiable collection.

Define infinitely diversifiable. There are whole fields of computer science dedicated to small applications that routinely generate effectively unbounded diversity in the strongest possible sense.

-- J.
Andrew Rogers

---
AGI Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Powered by Listbox: http://www.listbox.com
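Rogers' claim that small programs routinely generate effectively unbounded diversity can be illustrated with a toy sketch (the grammar and all names below are invented for illustration, not anything from this thread): a handful of recursive rewrite rules already define an infinite space of distinct "designs".

```python
import random

# A tiny context-free grammar. The recursion in <design> means the
# set of derivable strings is infinite: a small program, unbounded output space.
GRAMMAR = {
    "<design>": [["<shape>"], ["<shape>", "beside", "<design>"]],
    "<shape>": [["<modifier>", "<base>"], ["<base>"]],
    "<modifier>": [["doubled"], ["rotated"], ["mirrored"]],
    "<base>": [["line"], ["curve"], ["triangle"]],
}

def generate(symbol="<design>", rng=random):
    """Expand a grammar symbol into a random string of terminals."""
    if symbol not in GRAMMAR:
        return symbol  # terminal symbol: emit as-is
    production = rng.choice(GRAMMAR[symbol])
    return " ".join(generate(s, rng) for s in production)

random.seed(0)
for _ in range(3):
    print(generate())
```

Since the `<design>` rule can recurse, there is no longest derivable design, which is the weak sense of "unbounded diversity" such generators provide; whether that meets Tintner's stronger demand is exactly what the thread disputes.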
Re: [agi] Re: Compressed Cross-Indexed Concepts
Someone who really believes that P=NP should go to Saudi Arabia or the Emirates and crack the Blackberry code.

- Ian Parker

On 12 August 2010 06:10, John G. Rose johnr...@polyplexic.com wrote:

-----Original Message-----
From: Jim Bromer [mailto:jimbro...@gmail.com]

David, I am not a mathematician, although I do a lot of computer-related mathematical work, of course. My remark was directed toward John, who had suggested that he thought there is some sophisticated mathematical subsystem that would (using my words here) provide such a substantial benefit to AGI that its lack may be at the core of the contemporary problem. I was saying that unless this required mathemagic, then a scalable AGI system demonstrating how effective this kind of mathematical advancement could be could probably be simulated using contemporary mathematics. This is not the same as saying that AGI is solvable by sanitized formal representations, any more than saying that your message is a sanitized formal statement because it depended on a lot of computer mathematics in order to be sent. In other words, I was challenging John at that point to provide some kind of evidence for his view.

I don't know if we need to create some new mathemagics, a breakthrough, or whatever. I just think using existing math to engineer it - using the math as if it were software - is what should be done. But you may be right; perhaps a proof of P=NP or something similar is needed. I don't think so, though. The main goal would be to leverage existing math to compensate for unnecessary and/or impossible computation. We don't need to re-evolve the wheel, as we already figured that out. And computers are very slow compared to other physical computations that are performed in the natural physical world. Maybe not - developing a system from scratch that discovers all of the discoveries over the millennia of science and civilization? Would that be possible?
I then went on to say that, for example, I think that fast SAT solutions would make scalable AGI possible (that is, scalable up to a point that is way beyond where we are now), and therefore I believe that I could create a simulation of an AGI program to demonstrate what I am talking about. (A simulation is not the same as the actual thing.) I didn't say, nor did I imply, that the mathematics would be all there is to it. I have spent a long time thinking about the problems of applying formal and informal systems to 'real world' (or other world) problems, and the application of methods is a major part of my AGI theories. I don't expect you to know all of my views on the subject, but I hope you will keep this in mind for future discussions.

Using available skills and tools the best we can use them. And inventing new tools by engineering utilitarian and efficient mathematical structure. Math is just like software in all this, but way more powerful. And using the right math: the most general where it is called for, and specific/narrow when needed. I don't see a problem with the specific most of the time, but I don't know if many people get the general. Though it may be an error or lack of understanding on my part...

John
Re: [agi] Re: Compressed Cross-Indexed Concepts
This seems to be an overly simplistic view of AGI from a mathematician. It's kind of funny how people overemphasize what they know, or depend on their current expertise too much, when trying to solve new problems.

I don't think it makes sense to apply sanitized and formal mathematical solutions to AGI. What reason do we have to believe that the problems we face when developing AGI are solvable by such formal representations? What reason do we have to think we can represent the problems as an instance of such mathematical problems? We have to start with the specific problems we are trying to solve, analyze what it takes to solve them, and then look for and design a solution. Starting with the solution and trying to hack the problem to fit it is not going to work for AGI, in my opinion. I could be wrong, but I would need some evidence to think otherwise.

Dave

On Wed, Aug 11, 2010 at 10:39 AM, Jim Bromer jimbro...@gmail.com wrote:

You probably could show that a sophisticated mathematical structure would produce a scalable AGI program, if that is true, using contemporary mathematical models to simulate it. However, if scalability was completely dependent on some as yet undiscovered mathemagical principle, then you couldn't. For example, I think polynomial-time SAT would solve a lot of problems with contemporary AGI. So I believe this could be demonstrated in a simulation. That means that I could demonstrate effective AGI that works so long as the SAT problems are easily solved. If the program reported that a complicated logical problem could not be solved, the user could provide his insight into the problem at those times to help with the problem. This would not work exactly as hoped, but by working from there, I believe that I would be able to determine better ways to develop such a program so it would work better - if my conjecture about the potential efficacy of polynomial-time SAT for AGI was true.
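As a rough sketch of the kind of SAT machinery Jim's simulation would lean on (this is an illustrative toy, not anyone's actual system; production solvers such as MiniSat use far stronger heuristics), a minimal DPLL-style solver can serve as the logical oracle:

```python
def dpll(clauses, assignment=None):
    """Minimal DPLL SAT solver (exponential worst case).
    Clauses are lists of nonzero ints in CNF; -k means NOT x_k.
    Returns a satisfying {var: bool} assignment, or None if unsatisfiable."""
    if assignment is None:
        assignment = {}
    # Simplify each clause under the current partial assignment.
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue  # clause already satisfied; drop it
        rest = [l for l in clause if abs(l) not in assignment]
        if not rest:
            return None  # clause falsified under this assignment
        simplified.append(rest)
    if not simplified:
        return assignment  # every clause satisfied
    # Branch on the first unassigned variable.
    var = abs(simplified[0][0])
    for value in (True, False):
        result = dpll(simplified, {**assignment, var: value})
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
model = dpll([[1, 2], [-1, 3], [-2, -3]])
print(model)  # a satisfying assignment
```

If SAT were solvable in polynomial time, as Jim conjectures would help, the branching here would be replaced by something that never backtracks; nothing like that is known.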
Jim Bromer

On Mon, Aug 9, 2010 at 6:11 PM, Jim Bromer jimbro...@gmail.com wrote:

On Mon, Aug 9, 2010 at 4:57 PM, John G. Rose johnr...@polyplexic.com wrote:

-----Original Message-----
From: Jim Bromer [mailto:jimbro...@gmail.com]

How would these diverse examples be woven into highly compressed and heavily cross-indexed pieces of knowledge that could be accessed quickly and reliably, especially for the most common examples that the person is familiar with?

This is a big part of it, and for me the most exciting. And I don't think that this subsystem would take up millions of lines of code either. It's just that it is a *very* sophisticated and dynamic mathematical structure, IMO.

John

Well, if it was a mathematical structure, then we could start developing prototypes using familiar mathematical structures. I think the structure has to involve more ideological relationships than mathematical ones. For instance, you can apply an idea to your own thinking in such a way that you are capable of (gradually) changing how you think about something. This means that an idea can be a compression of some greater change in your own programming. While the idea in this example would be associated with a fairly strong notion of meaning, since you cannot accurately understand the full consequences of the change, it would be somewhat vague at first. (It could be a very precise idea capable of having a strong effect, but the details of those effects would not be known until the change had progressed.) I think the more important question is how a general concept can be interpreted across a range of different kinds of ideas. Actually this is not so difficult, but what I am getting at is: how are sophisticated conceptual interrelations integrated and resolved?
Jim
Re: [agi] Re: Compressed Cross-Indexed Concepts
David, I am not a mathematician, although I do a lot of computer-related mathematical work, of course. My remark was directed toward John, who had suggested that he thought there is some sophisticated mathematical subsystem that would (using my words here) provide such a substantial benefit to AGI that its lack may be at the core of the contemporary problem. I was saying that unless this required mathemagic, then a scalable AGI system demonstrating how effective this kind of mathematical advancement could be could probably be simulated using contemporary mathematics. This is not the same as saying that AGI is solvable by sanitized formal representations, any more than saying that your message is a sanitized formal statement because it depended on a lot of computer mathematics in order to be sent. In other words, I was challenging John at that point to provide some kind of evidence for his view.

I then went on to say that, for example, I think that fast SAT solutions would make scalable AGI possible (that is, scalable up to a point that is way beyond where we are now), and therefore I believe that I could create a simulation of an AGI program to demonstrate what I am talking about. (A simulation is not the same as the actual thing.) I didn't say, nor did I imply, that the mathematics would be all there is to it. I have spent a long time thinking about the problems of applying formal and informal systems to 'real world' (or other world) problems, and the application of methods is a major part of my AGI theories. I don't expect you to know all of my views on the subject, but I hope you will keep this in mind for future discussions.

Jim Bromer

On Wed, Aug 11, 2010 at 10:53 AM, David Jones davidher...@gmail.com wrote: [...]
Re: [agi] Re: Compressed Cross-Indexed Concepts
Jim,

Fair enough. My apologies then. I just often see your posts on SAT or other very formal math problems and got the impression that you thought this was at the core of AGI's problems, and that pursuing a fast solution to NP-complete problems is the best way to solve it. At least, that was my impression. So my thought was that such formal methods don't seem to be a complete solution at all, and other factors, such as uncertainty, could make such formal solutions ineffective or unusable. Which is why I said it's important to analyze the requirements of the problem and then apply a solution.

Dave

On Wed, Aug 11, 2010 at 1:02 PM, Jim Bromer jimbro...@gmail.com wrote: [...]
Re: [agi] Re: Compressed Cross-Indexed Concepts
On Wed, Aug 11, 2010 at 10:53 AM, David Jones davidher...@gmail.com wrote:

I don't think it makes sense to apply sanitized and formal mathematical solutions to AGI. What reason do we have to believe that the problems we face when developing AGI are solvable by such formal representations? What reason do we have to think we can represent the problems as an instance of such mathematical problems? We have to start with the specific problems we are trying to solve, analyze what it takes to solve them, and then look for and design a solution. Starting with the solution and trying to hack the problem to fit it is not going to work for AGI, in my opinion. I could be wrong, but I would need some evidence to think otherwise.

I agree that disassociated theories have not proved to be very successful at AGI, but then again, what has? I would use a mathematical method that gave me the number or percentage of True cases that satisfy a propositional formula, as a way to check the internal logic of different combinations of logic-based conjectures. Since methods that can do this for any logical system with variables going (a little) past 32 are feasible, the potential of this method should be easy to check (although it would hit a rather low ceiling of scalability). So I do think that logic and other mathematical methods would help in true AGI programs.

However, the other major problem, as I see it, is one of application. And strangely enough, this application problem is so pervasive that it means you cannot even develop artificial opinions! You can program the computer to jump on things that you expect it to see, and you can program it to create theories about random combinations of objects, but how could you have a true opinion without child-level judgement? This may sound like frivolous philosophy, but I think it really shows that the starting point isn't totally beyond us.
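Jim's "number or percentage of True cases" check is what is usually called model counting (#SAT). At the scale he mentions (a little past 32 variables is roughly the brute-force limit, since 2^32 assignments is already billions), it can be sketched by direct enumeration; this toy version, with an invented example formula, shows the idea:

```python
from itertools import product

def count_satisfying(formula, n_vars):
    """Brute-force #SAT: count the assignments of n_vars Boolean
    variables under which `formula` (a callable) evaluates to True.
    Feasible only for small n, since it enumerates all 2**n cases."""
    return sum(1 for bits in product([False, True], repeat=n_vars)
               if formula(*bits))

# Example formula: (a or b) and (not a or c) over variables a, b, c.
f = lambda a, b, c: (a or b) and ((not a) or c)
sat = count_satisfying(f, 3)
print(sat, sat / 2 ** 3)  # prints: 4 0.5
```

Exact model counters (the #SAT literature) push well past this enumeration limit, but #SAT is #P-complete, which is one concrete form of the "low ceiling of scalability" Jim concedes.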
Jim Bromer

On Wed, Aug 11, 2010 at 10:53 AM, David Jones davidher...@gmail.com wrote: [...]
Re: [agi] Re: Compressed Cross-Indexed Concepts
I've made two ultra-brilliant statements in the past few days. One is that a concept can simultaneously be both precise and vague. And the other is that without judgement, even opinions are impossible. (Ok, those two statements may not be ultra-brilliant, but they are brilliant, right? Ok, maybe not truly brilliant, but highly insightful and perspicuously intelligent... Or at least interesting to the cognoscenti, maybe?.. Well, they were interesting to me at least.)

Ok, these two interesting-to-me comments made by me are interesting because they suggest that we do not know how to program a computer even to create opinions. Or if we do, there is a big untapped difference between those programs that show nascent judgement (perhaps only at levels relative to the domain of their capabilities) and those that don't. This is the AGI programmer's utopia. (Or at least my utopia.) Because I need to find something that is simple enough for me to start with, and which can lend itself to developing and testing theories of AGI judgement and scalability.

By allowing an AGI program to participate more in the selection of its own primitive 'interests', we will be able to interact with it, both as programmer and as user, to guide it toward selecting those interests which we can understand and which seem interesting to us. By creating an AGI program that has a faculty for primitive judgement (as we might envision such an ability), and then testing its capabilities in areas where the program seems to work more effectively, we might be better able to develop more powerful AGI theories that show greater scalability, so long as we are able to understand what interests the program is pursuing.

Jim Bromer

On Wed, Aug 11, 2010 at 1:40 PM, Jim Bromer jimbro...@gmail.com wrote: [...]
Re: [agi] Re: Compressed Cross-Indexed Concepts
Slightly off the topic of your last email, but all this discussion has made me realize how to phrase something: solving AGI requires understanding the constraints that problems impose on a solution. So it's sort of an unbelievably complex constraint satisfaction problem. What we've been talking about is how we come up with solutions to these problems when we sometimes aren't actually trying to solve any of the real problems. What I've been trying to articulate lately is that in order to satisfy the constraints the problems of AGI impose, we must really understand the problems we want to solve and how they can be solved (their constraints). I think that most of us do not do this because the problem is so complex that we refuse to attempt to understand all of its constraints. Instead we focus on something very small and manageable with fewer constraints. But that's what creates narrow AI, because the constraints you have developed the solution for only apply to a narrow set of problems. Once you try to apply it to a different problem that imposes new, incompatible constraints, the solution fails.

So, lately I've been pushing for people to truly analyze the problems involved in AGI, step by step, to understand what the constraints are. I think this is the only way we will develop a solution that is guaranteed to work without wasting undue time on trial and error. I don't think trial-and-error approaches will work. We must know what the constraints are, instead of guessing at what solutions might approximate the constraints; I think the problem space is too large to guess. Of course, I think acquisition of knowledge through automated means is the first step in understanding these constraints. But, unfortunately, few agree with me.

Dave

On Wed, Aug 11, 2010 at 3:44 PM, Jim Bromer jimbro...@gmail.com wrote: I've made two ultra-brilliant statements in the past few days. One is that a concept can simultaneously be both precise and vague.
And the other is that without judgement even opinions are impossible. (Ok, those two statements may not be ultra-brilliant, but they are brilliant, right? Ok, maybe not truly brilliant, but highly insightful and perspicuously intelligent... Or at least interesting to the cognoscenti, maybe? Well, they were interesting to me at least.) These two interesting-to-me comments are interesting because they suggest that we do not know how to program a computer even to create opinions. Or if we do, there is a big untapped difference between those programs that show nascent judgement (perhaps only at levels relative to the domain of their capabilities) and those that don't. This is the AGI programmer's utopia (or at least my utopia), because I need to find something that is simple enough for me to start with and which can lend itself to developing and testing theories of AGI judgement and scalability.

By allowing an AGI program to participate more in the selection of its own primitive 'interests', we will be able to interact with it, both as programmer and as user, to guide it toward selecting those interests which we can understand and which seem interesting to us. By creating an AGI program that has a faculty for primitive judgement (as we might envision such an ability), and then testing its capabilities in areas where the program seems to work more effectively, we might be better able to develop more powerful AGI theories that show greater scalability, so long as we are able to understand what interests the program is pursuing.

Jim Bromer
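David's framing of AGI as an "unbelievably complex constraint satisfaction problem" can be illustrated, at toy scale, with a minimal backtracking solver. This is only a sketch of the general CSP technique, not anyone's proposed AGI design; the scheduling example and every name in it are invented for illustration.

```python
def solve_csp(variables, domains, constraints, assignment=None):
    """Minimal backtracking constraint solver (illustrative only).

    `constraints` is a list of (scope, predicate) pairs; a predicate
    is checked as soon as every variable in its scope is assigned,
    which prunes dead branches early.
    """
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return dict(assignment)
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        consistent = all(
            pred(*(assignment[v] for v in scope))
            for scope, pred in constraints
            if all(v in assignment for v in scope)
        )
        if consistent:
            result = solve_csp(variables, domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]  # backtrack
    return None

# Toy problem: three tasks, three time slots, no two tasks share a
# slot, and task "c" must come after task "a".
solution = solve_csp(
    ["a", "b", "c"],
    {"a": [1, 2, 3], "b": [1, 2, 3], "c": [1, 2, 3]},
    [
        (("a", "b"), lambda x, y: x != y),
        (("a", "c"), lambda x, y: x != y),
        (("b", "c"), lambda x, y: x != y),
        (("a", "c"), lambda x, y: y > x),
    ],
)
```

The point of the analogy is David's warning: a solver built against this toy's constraints says nothing about a problem with new, incompatible constraints, which is how narrow solutions fail to generalize.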
Re: [agi] Re: Compressed Cross-Indexed Concepts
I guess what I was saying was that I can test both my mathematical theory and my theories about primitive judgement at the same time, by trying to find those areas where the program seems to be good at something. For example, I found that it was easy to write a program that found outlines where there was some contrast between a solid object and whatever was in the background or foreground. Now I, as an artist, could use that to create interesting abstractions. However, that does not mean that an AGI program that was supposed to learn and acquire greater judgement based on my ideas for primitive judgement would be able to do that. Instead, I would let it do what it seemed good at, so long as I was able to appreciate what it was doing. Since this would lead to something - a next step at least - I could use this to test my theory that a good, more general SAT solution would be useful as well.

Jim Bromer
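The contrast-based outline finder Jim describes can be sketched in a few lines. This is a guess at the kind of program he means, not his actual code: it marks the pixels whose brightness differs from a right or lower neighbour by more than a threshold, so the outline of a solid object against a contrasting background falls out directly.

```python
def find_outline(image, threshold):
    """Mark pixels where local contrast exceeds `threshold`.

    `image` is a list of rows of brightness values. A pixel belongs to
    the outline if it differs from its right or lower neighbour by more
    than `threshold`: the simple contrast test described above.
    """
    rows, cols = len(image), len(image[0])
    outline = set()
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((0, 1), (1, 0)):  # right and down neighbours
                nr, nc = r + dr, c + dc
                if nr < rows and nc < cols:
                    if abs(image[r][c] - image[nr][nc]) > threshold:
                        outline.add((r, c))
                        outline.add((nr, nc))
    return outline

# A dark square on a light background; the high-contrast border
# pixels are returned, the uniform corners are not.
img = [
    [200, 200, 200, 200],
    [200,  30,  30, 200],
    [200,  30,  30, 200],
    [200, 200, 200, 200],
]
edges = find_outline(img, threshold=100)
```

Nothing here involves judgement, which is Jim's point: the mechanical contrast test is easy, while deciding that the resulting abstraction is interesting is not.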