On Mon, Apr 22, 2013 at 4:13 PM, Jim Bromer <[email protected]> wrote:
> Logan,
>
> Thanks for your comments.
>
> I agree of course that concepts and concept integration may be represented by words and sentences. I was trying to say that many of the complications that will arise using word-concepts will arise using some other kinds of referential concepts. One of the reasons that I am convinced that text-only AGI is a good way to go is because there is such a potential for expressiveness and the representation of different kinds of ideas. It is often difficult to express complicated ideas using words because they are not substitutes for the implementations of the things that we are talking about.

We can implement anything using words, from programs through bridges to relationships.

> However, that does not mean that they cannot be used as representations of ideas. I understand what I am talking about even though other people do not.

That simply indicates a need to enhance your communication skills.

> I believe that when we acquire a learned habit the parts of the habit may not be directly understandable but can only be approached indirectly by referring to something else. For instance, a learned action may be created by a string of action potentials (for lack of a better name), and it may be that the only way to detect the parts of the string is by noting the whole, more complicated action.

Ya, many voice-to-text parsers work like that; however, they aren't very good at understanding new phrases or different ways of saying things. Both are necessary, likely with some supervised learning, i.e. "what did you mean by that?" giving a target, for optimal results.

> Or we may infer the action by some other action or other event that is roughly correlated with the inferred action. But essentially, when we are capable of reflection (meta-cognition) we are able to 'understand' a concept potential if we know something more about how to use and integrate the concept.
> So by having some kind of understanding of a concept potential we can consciously try to use it in different ways based on some kind of reasoning. Now, if we are not explicitly aware of the concept potential there may be a chance that we can infer something about it indirectly, just as we might infer something about an action potential.
>
> I believe that the theory that it takes many statements to understand one simple statement has a great deal of value.

Generally you need to write an interpreter or compiler to "understand", i.e. compile or interpret, a statement.

> Concepts are relativistic. That means that when a simple concept is used in association with other concepts the meaning of the concept can vary. Concepts are contextual. But there are more problems. Concepts are interdependent. There is not (necessarily) an independent concept and a dependent concept in a conceptual function the way there are in a mathematical function.

Actually there are a whole host of such axiomatic concepts. If there weren't, we'd just be wishy-washy, not really saying anything, all the time.

> So this means that it can be very difficult to determine the meaning of a combination of concepts if the program does not explicitly contain a reference to that particular combination.

That is completely false; it's like saying computer-programming languages contain references to every particular combination, when in fact you only need to understand the sub-components. Similar to how you don't need to know every conceivable story to listen to a new story and derive meaning from it. In fact, the very process of acquiring new information falsifies your hypothesis.

> One way to work with this problem is to rely on generalization systems in which the systems of generalizations of a collection of concepts can be used to guide the decoding of a particular string of concepts which hasn't been seen before.
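To make the interpreter/compiler point above concrete, here is a minimal sketch (the code and names are my own illustration, nothing from the thread): a tiny evaluator that "understands" arithmetic statements it has never seen before, purely by understanding the sub-components, with no stored reference to any particular combination.

```python
# Minimal sketch: an interpreter "understands" any statement built from
# parts it knows, without a reference to each particular combination.
import re

def tokenize(text):
    # Split "1 + 2 * 3" into tokens the interpreter knows individually.
    return re.findall(r"\d+|[+*()-]|/", text)

def evaluate(tokens):
    # Recursive descent over a tiny grammar: expr -> term (('+'|'-') term)*
    def expr(i):
        value, i = term(i)
        while i < len(tokens) and tokens[i] in "+-":
            op = tokens[i]
            rhs, i = term(i + 1)
            value = value + rhs if op == "+" else value - rhs
        return value, i

    def term(i):
        value, i = atom(i)
        while i < len(tokens) and tokens[i] in "*/":
            op = tokens[i]
            rhs, i = atom(i + 1)
            value = value * rhs if op == "*" else value / rhs
        return value, i

    def atom(i):
        if tokens[i] == "(":
            value, i = expr(i + 1)
            return value, i + 1  # skip the closing ")"
        return int(tokens[i]), i + 1

    return expr(0)[0]

# A combination never listed anywhere in the program still evaluates.
print(evaluate(tokenize("2 + 3 * (4 - 1)")))  # 11
```

The evaluator stores knowledge of a handful of sub-components (digits, operators, parentheses) yet handles unboundedly many statements, which is Logan's point about not needing a reference to every particular combination.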
> However, when this was tried in the simplistic fashion of the discrete text-based programs of the 60's it did not produce intelligence.

Can you give some examples? Because C, Fortran and a host of other discrete text-string systems happened, and they seem to have produced significant intelligence, i.e. beating world chess champions among a multitude of other achievements.

> So in the 70's weighted reasoning became all the rage because it looked like it might be used to infer subtle differences in the strings that simple discrete substitution did not. However, this promise did not hold up either.

Those are neural nets, I infer, and they are merely one statistical tool in an arsenal of learning. Multiple forms of learning, in combination with a strong core for knowledge representation, are necessary to achieve general intelligence.

> Neither system has, in itself, proven sufficient to resolve the problem. My feeling is that the recognition that it takes many references to a concept to 'understand' that concept is part of the key to resolving these problems without hoping to rely on a method that suffers from combinatorial complexity.

Programming languages and operating systems don't suffer from combinatorial complexity, or if they do, it is well managed; yet they are the most generally intelligent things/thought-systems on computers.

> Another part of the key is to recognize that concept objects may contain numerous lateral similarities to other concept objects and that these similarities may run across the dominant categories of a concept object that is being examined.
>
> Jim Bromer

> On Mon, Apr 22, 2013 at 6:01 PM, Jim Bromer <[email protected]> wrote:
>> I think I just skimmed through the outline html -- it seems like a good start. I wouldn't start writing any code for quite a while yet. It seems to me that you need to fight with those issues first.
> Thanks for the friendly comment, but I am going to push myself to start coding (experimenting) next month.

Great! The sooner the better.

> Formal methods have to be tried and shaped based on extensive applications of the methods to real world problems.

By doing some programming, you'll gain some insight into how computers think. You'll also learn how to think more logically and rationally.

> I am thinking of starting with simple simulations to see if I can eventually find some formal methods (programmable methods) that can work with the kinds of problems that I will throw at it.

What would you be simulating?

> If I don't make any progress with that then I might try creating a language which is designed to be extensible via generalizations.
>
> Jim Bromer

Didn't you say generalizations failed in the 60's? Did you know that, much like people, programming languages are extensible through the use of libraries, i.e. books of information?

> ------------------------------
> Date: Sat, 20 Apr 2013 10:28:44 -0400
> Subject: Re: [agi] Summary of My Current Theory For an AGI Program.
> From: [email protected]
> To: [email protected]
>
> On Sat, Apr 13, 2013 at 6:39 AM, Jim Bromer <[email protected]> wrote:
>
> Part 1
> I feel that complexity is a major problem facing contemporary AGI. It is true that, for most human reasoning, we do not need to figure out complicated problems precisely in order to take the first steps toward competency, but so far AGI has not been able to get very far beyond the narrow-AI barrier.
> I am going to start with a text-based AGI program. I agree that more kinds of IO modalities would make an effective AGI program better. However, I am not aware of any evidence that sensory-based AGI or multi-modal sensory-based AGI or robotic-based AGI has been able to achieve something greater than other efforts. The core of AGI is not going to be found in the peripherals.
> And it is clear that starting with complicated IO accessories would make AGI programming more difficult. It seems obvious that IO is necessary for AI/AGI and this abstraction is probably a more appropriate basis for the requirements of AGI.
> My AGI program is going to be based on discrete references. I feel that the argument that only neural networks are able to learn, or are able to incorporate different kinds of data objects into an associative field, is not accurate. I do, however, feel that more attention needs to be paid to concept integration. And I think that many of us recognize that a good AGI model is going to create an internal reference model that is a kind of network. The discrete reference model more easily allows the program to retain the components of an agglomeration in a way in which the traditional neural network does not. This means that it is more likely that the parts of an associative agglomeration can be detected. On the other hand, since the program will develop its own internal data objects, these might be formed in such a way that the original parts might be difficult to detect. With a more conscious effort to better understand concept integration I think that the discrete conceptual network model will prove itself fairly easily.

Yep, you can do concept integration by representing concepts with sentences. It works, it's simple, it can show association, and it maintains the original parts.

> I am going to use weighted reasoning and probability, but only to a limited extent.

> On Sat, Apr 13, 2013 at 7:34 AM, Jim Bromer <[email protected]> wrote:
>
> Part 2
> I believe that it takes a great deal of knowledge to 'understand' one thing. A statement has to be integrated into a greater collection of knowledge in order for the relations of understanding to be formed.

Just like how a sentence can be integrated into a story.
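Here is one hedged sketch of what "concept integration by representing concepts with sentences" could look like (the code and function names are my own, not something either of us has actually built): words co-occurring in a sentence become links in a discrete network, so associations are visible and the original parts remain recoverable.

```python
# Sketch: integrate sentences into a discrete associative network.
# Each word is a node; co-occurrence within a sentence is a link, so
# the original parts of every agglomeration stay detectable.
from collections import defaultdict

def integrate(network, sentence):
    words = sentence.lower().rstrip(".").split()
    for w in words:
        for other in words:
            if other != w:
                network[w].add(other)
    return network

net = defaultdict(set)
integrate(net, "Concepts are contextual.")
integrate(net, "Concepts are interdependent.")

# Associations accumulate across sentences, and each link can be
# traced back to the words (parts) that produced it.
print(sorted(net["concepts"]))  # ['are', 'contextual', 'interdependent']
```

Unlike a weight matrix, the network here keeps its components as explicit discrete references, which is the property claimed for the discrete reference model above.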
> And the knowledge of a single statement has to be integrated into a greater field of knowledge concerning the central features of the subject for the intelligent entity to truly understand the statement. While conceptual integration, by some name, has always been a primary subject in AI/AGI, I think it was relegated to a subservient position by those who originally stressed the formal methods of logic, linguistics, psychology, numerics, probability, and neural networks. Because the details of how ideas work in actual thinking were treated as either part of some pre-dawn-of-science philosophy or as the turn-of-the-crank product of successfully applied formal methods, a focus on the details of how ideas work in actual problems was seen as naïve.

Many people understand things through story. Indeed, it is the way our brains are designed to operate and interpret new information, and has been since the cave paintings at the very least.

It's also the most effective way of transmitting information from one person to another, as it often bypasses much of the conscious criticality and simply subsumes into the subconscious background.

Even computers understand things through story, even though the typical programming language may make this hard to see: the setting is where the variables are initially declared, the rising action is the preparation of the variables for interaction, the conflict or change is the actual transmutation of the variables, and the resolution is the returning of the result.

> This problem, where the smartest thinkers would spend their lives pursuing abstract problems without "wasting their time" carefully examining many real-world cases, occurs often in science. It is amplified by ignorance. If no one knows how to create a practical application then the experts in the field may become overly preoccupied with the proposed formal methods that have been presented to them.
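The story analogy for programs can be spelled out on an ordinary function. The function below is my own illustration (the narrative-phase comments are the only thing added to plain Python); it shows the setting/rising-action/conflict/resolution mapping described above.

```python
# An ordinary function, annotated with the narrative phase each
# section plays in Logan's story analogy.

def average_word_length(text):
    # Setting: the variables are initially declared.
    words = text.split()
    total = 0

    # Rising action: the variables are prepared for interaction.
    for word in words:
        total += len(word)

    # Conflict / change: the actual transmutation of the variables.
    result = total / len(words) if words else 0.0

    # Resolution: the result is returned.
    return result

print(average_word_length("concepts are contextual"))  # 7.0
```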
> Formal methods are important, but they are each only one kind of thing. It takes a great deal of knowledge about many different things to 'understand' one kind of thing. A reasonable rule of thumb is that formal methods have to be tried and shaped based on exhaustive applications of the methods to real world problems.
> In order to integrate new knowledge, the new idea that is being introduced usually has to be verified using many steps to show that it holds.

Well, there is always parsing and compiling: if it works, it works. Though factual information about the world could be statistically cross-referenced.

> Since there is no absolute insight into truth for this kind of thing, knowledge has to be integrated in a more thorough trial-and-error manner.

Truth is personal experience, so each perspective has its own truth. Knowledge is past experience; there is true knowledge, and real knowledge. Real is the set of beliefs that are in common amongst a group. Exist is anything that can be imagined/compiled.

> The program has to create new theories about statements or reactions it is considering. This would extend to interpretations of observations for systems where other kinds of sensory systems were used. A single experiment does not 'prove' a new theory in science. A large number of experiments are required, and most of those experiments have to demonstrate that the application of the theory can lead to better understanding of other related effects.

You know, it all depends: are we making little scientists, or AGIs? Is the purpose to "prove" stuff, or rather simply to "do" stuff? Sure, they could use some scientific method and statistical verification; however, it's more important to actually get stuff done, i.e. the result of the experiment, than to have it published in a peer-reviewed journal or get a bunch of peers to re-do the experiments.
The experiment and results could be shared with other AGIs and people over the internet, likely in the form of a story; if others come across similar issues they may wish to try it themselves, and comment if it works out for them.

> It takes a knowledge of a great many things to verify a statement about one thing. In order for the knowledge represented by a statement to be verified and comprehended it has to be related to, and integrated with, a great many other statements concerning the primary subject matter. It is necessary to see how the primary subject matter may be used in many different kinds of thoughts to be able to understand it.

I disagree, as you don't have to know all the ways to add stuff to simply add some numbers together. You can easily learn new things later on, like how to perform addition amongst new types of things, for instance arrays, or ingredients. Gotta start somewhere, and arithmetic addition is a sufficient place to do so.
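The addition example can be sketched directly. This is a hedged illustration (the `Ingredient` class is my own invention, not anything from the thread): arithmetic addition works first, and addition for a new type of thing is "learned" later by defining `__add__`.

```python
# Start with plain arithmetic; extend "addition" to a new type later.

class Ingredient:
    def __init__(self, name, grams):
        self.name = name
        self.grams = grams

    def __add__(self, other):
        # Adding ingredients combines their names and sums their weights.
        return Ingredient(self.name + "+" + other.name,
                          self.grams + other.grams)

# Arithmetic addition worked before we knew anything about ingredients...
print(2 + 3)  # 5

# ...and the same operation now extends to the new type.
mix = Ingredient("flour", 500) + Ingredient("sugar", 200)
print(mix.name, mix.grams)  # flour+sugar 700

# Lists (arrays) already come with their own learned addition.
print([1, 2] + [3])  # [1, 2, 3]
```

The point is that the meaning of `+` is open-ended: nothing about adding numbers had to anticipate ingredients or arrays.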
*AGI* | Archives <https://www.listbox.com/member/archive/303/=now> | Modify <https://www.listbox.com/member/?&> Your Subscription | Powered by Listbox: http://www.listbox.com
