Steve said:

> What will probably happen in such code is that it will find countless relationships, only a tiny fraction of which will make any sense to us humans.
>
> There is a parallel problem in POS (Part Of Speech) parsers, that identify every part of speech that every word could possibly be, and then look for threads through the parts of speech that make grammatical sense. What ACTUALLY happens is that there are often/usually either multiple threads due to alternative unintended meanings of words, or no threads because we often omit superfluous words from our speech. I suspect the same will happen with the system you envision. Do you see any reason this would NOT be expected to happen?

Steve,

This is a variation of the noisy-and-lossy argument. I would say that the problems of noisy and lossy AI are pretty much the crux of the matter. Because the data will be noisy and lossy - relative to what would be needed, given our current knowledge, to produce an effective and efficient analysis of it - variation in complexity becomes the problem. (Even if Steve was only talking about noisy data, it is still a variation of the noisy-and-lossy argument, because there are at least two sides of the coin: the input and intermediate input have to be compared to what has previously been learned.)

(If anyone else is reading this: I am not talking about neural networks, although I would integrate a neural network into my program at the drop of a hat if I thought there was a good reason to do so.)

I believe I have some good theories to deal with that problem, but at the same time I realize that I am going to run up against the knowledge-complexity problem very quickly. So I fully expect the program to discover millions and billions of relations that won't make any sense to us. As far as I am concerned, that is the only real problem; everything else is doable or avoidable.

So, to try to explain my interest again: I think I can make progress on this kind of problem, but I also think that I will still be limited by it. The goal is to push the boundaries back.
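Steve's POS example is easy to make concrete. Here is a minimal sketch - the lexicon, the follower grammar, and every tag are toy inventions for illustration, not any real parser - that enumerates every tag sequence a sentence could have and keeps only the grammatically consistent "threads." Even this tiny grammar leaves several surviving threads for one sentence, which is exactly the ambiguity Steve describes:

```python
from itertools import product

# Toy lexicon: every POS tag each word could be (invented for illustration).
LEXICON = {
    "time":  ["NOUN", "VERB"],
    "flies": ["NOUN", "VERB"],
    "like":  ["VERB", "PREP"],
    "an":    ["DET"],
    "arrow": ["NOUN"],
}

# Toy grammar: which tag may legally follow which (also invented).
FOLLOWS = {
    "NOUN": {"VERB", "NOUN", "PREP"},
    "VERB": {"DET", "NOUN", "PREP"},
    "PREP": {"DET", "NOUN"},
    "DET":  {"NOUN"},
}

def threads(sentence):
    """Enumerate all tag sequences, keep the grammatically consistent ones."""
    words = sentence.split()
    candidates = product(*(LEXICON[w] for w in words))
    return [seq for seq in candidates
            if all(b in FOLLOWS[a] for a, b in zip(seq, seq[1:]))]

# Five of the eight possible tag sequences survive the grammar check,
# but only one of them is the reading a human intends.
print(threads("time flies like an arrow"))
```

The opposite failure mode - zero threads - shows up as soon as a sentence elides a word the toy grammar requires, which mirrors Steve's point about superfluous words being omitted from real speech.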
Dreaming of something more at this point would be naïve. There is ample evidence that, by keeping knowledge simple enough, a modern computer can handle these problems. The complexity comes crashing down on you when you want to extend the range of acquired knowledge much further. As with NP problems, complexity can grow at an astounding rate. Not only are there the possible relations that the program can pick up; it must then explore other possible relations that it has not picked up in previous exposure to IO. So the -process- of 'understanding' is both noisy and lossy even for a discrete system.

Where are the advantages? To start with, a discrete system has the ability to store simple relations as well as higher relations derived from those base relations, and it can do this much more efficiently than a neural network. So suppose a derived, weighted relational method works really well on most cases of a kind of problem, but then, for some unknown reason, fails miserably on another. If the program is capable of looking at the base relations, it can begin to look into what went wrong.

But wait a minute! Since I am talking about using language to describe the 'base relations' of my system, aren't I deluding myself? No. I know as well as you that language is a manifestation of the problem (noisy, lossy and messy). So where am I coming from? One statement is inadequate as a base relation for the program when confronting this form of complexity. My opinion is that it takes thousands of simple statements to 'understand' one simple statement. So the base relations of a subject are comprised not of a few statements but of thousands. That is another advantage of my approach (over, for example, believing that weight-based relations would be sufficient in themselves to overcome noisy-lossy complexity enough to show a little traction). These advantages don't amount to much on their own, but we have to be ready to utilize any advantage we can get.
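The "keep the base relations, derive the weighted product on demand" idea can be sketched in a few lines. Everything here is hypothetical - the class name, the methods, and the toy statements are mine, not a design Jim has published - but it shows the shape of the argument: a discrete store keeps every simple statement verbatim, a weighted summary is just one derived product, and when the summary misleads you can drop back to the base statements that produced it:

```python
from collections import defaultdict

class RelationStore:
    """Sketch of a discrete store: base relations kept verbatim,
    weighted summaries derived on demand (all names hypothetical)."""

    def __init__(self):
        self.base = defaultdict(list)   # subject -> list of simple statements

    def add(self, subject, statement):
        self.base[subject].append(statement)

    def derived_weight(self, subject, feature):
        """One derived product: fraction of base statements mentioning a feature."""
        stmts = self.base[subject]
        if not stmts:
            return 0.0
        return sum(feature in s for s in stmts) / len(stmts)

    def explain(self, subject, feature):
        """When the weighted method fails, drop back to the base relations."""
        return [s for s in self.base[subject] if feature in s]

store = RelationStore()
store.add("bird", "a bird can fly")
store.add("bird", "a bird has wings")
store.add("bird", "a penguin is a bird")
store.add("bird", "a penguin cannot fly")

# The weight alone (0.5) tells you nothing about *why* it is ambiguous;
# the base statements do.
print(store.derived_weight("bird", "fly"))
print(store.explain("bird", "fly"))
```

A neural network would have collapsed the four statements into weights at learning time; the point of the discrete store is that the thousands of simple statements behind a subject remain available for other kinds of derived analysis later.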
Another advantage: I seem to be the only person who gets that the generalization levels of natural language are extremely important, so I have that advantage all to myself.

Now there is another complexity problem. As you try to make a program more sophisticated, the programming becomes much more difficult and the run time slows way down. The first can be overcome by putting in long days and keeping the project active. The second can only be overcome by finding ways to make the program more efficient. That will take a lot of work and a lot of luck.

I am not worried about the program finding relations that don't make any sense to me. That is totally expected. I am worried about the fact that if the program is going to keep a record of relations that aren't effective for some task, then it will need to store some fundamentals of the relation along with a note about the tasks it wasn't effective at! That means it has to store something about every relation it ever tries. The only possible way (that I can see) to avoid that is by defining relations by class (i.e. by category), by building heavily on relations that are effective, and by inhibiting the search area for radical alternatives to some extent.

Jim Bromer

On Tue, Nov 10, 2015 at 7:45 PM, Steve Richfield <[email protected]> wrote:
> Jim,
>
> IMHO your approach will live or die depending on examples you write, or fail to write.
>
> If your thoughts are so ephemeral that you can't even conjure up an example, then I strongly suspect that you haven't yet completely formed your thought.
>
> If you cannot produce examples, then you will never ever be able to debug it.
>
> Once the code sort of works, the next barrier becomes what I call "heidenbugs" (heiden=good in German) - apparent bugs where the program is actually working exactly as designed.
> These were a big part of debugging DrEliza, where it correctly diagnosed things that were "obviously" wrong - until I spend a day figuring out exactly what was happening in the expectation of fixing a problem - only to discover that the answer it produced was correct.
>
> What will probably happen in such code is that it will find countless relationships, only a tiny fraction of which will make any sense to us humans.
>
> There is a parallel problem in POS (Part Of Speech) parsers, that identify every part of speech that every word could possibly be, and then look for threads through the parts of speech that make grammatical sense. What ACTUALLY happens is that there are often/usually either multiple threads due to alternative unintended meanings of words, or no threads because we often omit superfluous words from our speech. I suspect the same will happen with the system you envision.
>
> Do you see any reason this would NOT be expected to happen?
>
> Steve
>
> On Tue, Nov 10, 2015 at 3:31 PM, Jim Bromer <[email protected]> wrote:
>> Steve,
>> A text-based, would-be semi-strong AI program can 'experiment' with conceptual relations using text. I am not trying to create the equivalent of a 55 year old super-genius. I am trying to discover what has been missing in AI. I need to use simple experiments (my own simple experiments writing and improving my own AI program) to apply and develop my theories. One might complain that a text-only AI program is going to end up with stilted knowledge that does not have the vitality of human knowledge, but so what? (A great deal of the knowledge that individuals possess is relatively stilted too.) I am looking for those methods that will acquire (admittedly limited) knowledge through something that appears like more natural language.
>>
>> So when I talk about using a Parallel Artificial Referent Language (PARL) I am not talking about using something that is like a purely artificial programming or database language. I need to develop a program that can do a lot of learning on its own. So the PARL could be used to highlight relations between word-concepts in the text IO. This will not work perfectly because I need the AI program to be able to do some learning for itself. So at first it won't know anything, but I can mark up the text with my PARL and it will start to make some trial-and-error experiments within the text-based IO to discover other relations that it can learn from both the text IO and from the PARL. And it will need to learn some syntax as well. So it will need to learn more than one thing from the exchanges.
>>
>> At a next stage it will have noted some possible relations between particles of text but it will not really know anything else about them. But at this point it should begin to pick up some knowledge that it can relate to these relations. My program is going to be largely based on establishing categories and particulars of a kind. However, I have previously emphasized in my writing that these relations are relative and even relativistic. So there is really no such thing as a base category or an elementary particular, except that they can be said to be these things relative to some other objects. For example, I plan to use my PARL to help the program acquire some information about syntax. So does syntax describe base relations of the IO? No, because you need to develop concepts about syntax in order to use language to 'talk' about it. So while a syntactic relation may be very basic to the program, I need the program to be able to 'conceptualize' references to syntax in order to facilitate learning.
>>
>> Even though the basics of my program are based on some very old AI paradigms, when you start to think this out a little more deeply you realize that 1970s Old AI really did not get down (so to speak, using the lingo of the day). So in a tight programming language you can use implicit relations that are defined without the program being able to understand anything else about them, but if you want to achieve a more natural way to talk to the program about what you want it to do, it has to be able to conceptualize fundamental relations.
>>
>> As it learns more about some simple subjects it should become more clear to the user what kind of mistakes the program is making. Here some simple PARL relations may help clarify the confusion, and gradually simple text statements should be powerful enough to help clarify the relations (that I want the program to note and work with).
>>
>> Ambiguity and ambiguity-like relations sometimes need to be resolved and sometimes they don't. This very simple fundamental philosophical insight seems to be something that few people ever acknowledge when they are discussing the problem of writing better AI programs with me. However, many linguists seem to clearly understand this principle in human communication.
>>
>> I have often said that in order to know one thing you have to know many hundreds of related things. So while studying probability and Bayesian Networks more carefully is on my to-do list, I have never suggested that weight-based reasoning would solve the contemporary AI problem. My program would, if I ever get that far, retain hundreds of pieces of knowledge related to some subject (to some subjects) and then it could analyze these collections in different ways as needed. I assume that I would use weighted reasoning when it works, but the point is that the weighted analyses would not be the standard raw product of learning. Weight-based analyses would be one kind of derived product that could be used, but they would be only one of many different kinds of derived products. The program would also be able to learn new ways to analyze related collections of data (pieces of knowledge).
>>
>> Of course I realize that this will not be easy. But with the PARL it should be feasible to test crude prototypes of relations between data objects (concepts and such) which I might be able to use to further develop my program. When used with the right kind of AI program, the Parallel Artificial Referent Language has the potential to be something very simple to use. That should make it more interesting to novices. However, it will not be a simple language to use in all cases, because the utilization would depend on how the AI program subsequently used the artificial referent information. But if I can ever get anywhere with this project then I would be able to write a specialized AI program that would work in a more intuitive way with the PARL. This specialized AI program might not be a great AI program, but if it is simple to use then a variety of young programmers might be interested in trying it.
>>
>> One other thing. I am interested in generalization-particularization relations. For instance, I chose to try to imagine how my program would work using abstractions rather than particular examples. Why? Because my program will not be written about 'rain' or 'cars' or 'people'. It will be written at a more abstract level, where the relations between individual concept-objects will be based on general methods of analysis and synthesis (or integration) that, for the most part, will be learned or derived from experience.
>>
>> Jim Bromer
