On Tue, Apr 23, 2013 at 11:04 AM, Jim Bromer <[email protected]> wrote:

> Logan said:
>
> By doing some programming, you'll gain some insights into how computers
> think.
> Also you'll learn about how to think more logically and rationally.
>
> I hope so.
>
> Logan said:
>  Generally you need to write an interpreter or compiler to "understand",
> i.e. compile or interpret, a statement.
>
> You need to write something that will "understand" or interpret statements
> but the question is how do you do that so that it actually works.
>

It's been done many, many times before: Java, C, C#, Scheme, Haskell,
Perl, PHP, HTML; all of them have interpreters or compilers that work.


> My theory is that it takes many statements to "understand" one statement.
>

Yes, that is the case, in the sense that it takes more than one
statement to write a compiler or interpreter.
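To make that concrete, here is a toy interpreter in Python (a sketch of my own for illustration, not any production system; the names OPS and evaluate are invented). Even for a trivial arithmetic language, it takes a handful of statements of program text before a single input statement can be "understood":

```python
import operator

# A toy interpreter for arithmetic "statements" written as nested tuples,
# e.g. ("+", 1, ("*", 2, 3)).  Even this trivial language needs several
# statements of its own before it can "understand" one input statement.
OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def evaluate(expr):
    if isinstance(expr, (int, float)):   # atoms mean themselves
        return expr
    op, left, right = expr               # compounds are understood by
    return OPS[op](evaluate(left), evaluate(right))  # understanding parts

print(evaluate(("+", 1, ("*", 2, 3))))   # 7
```

Scale that up with a tokenizer, a grammar, and a symbol table and you have a real interpreter; the principle is the same.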



>   Some of the statements may refer to incidental associated information
> and some may refer to information about usage and so on.  Furthermore, you
> need contextual information about an ongoing conversation and what some of
> the consequences of interpreting a sentence in a certain way may be.  It is
> not a straightforward problem.  Anaphora-like relations, for example, can
> change the meaning of an apparent object of an indefinite article which
> means that a sub-sentence which is exactly the same can refer to a
> broad range of a-kind-of-object in one sentence and to a very particular
> object in another.  It takes many statements, some of which will refer to
> how linguistic objects are typically used, to "understand" a single simple
> statement.
>

If you understood what an interpreter or compiler is, and how it works,
you'd have a much clearer idea.
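Interpreters already handle exactly the kind of context dependence you describe: the same expression can "mean" different things depending on the environment it is evaluated in, much as your repeated sub-sentence can refer to different objects. A minimal sketch (my own toy code; the tuple syntax and evaluate are invented for illustration):

```python
# The "same" expression means different things in different contexts:
# names are resolved against an environment, the interpreter's analogue
# of conversational context.
def evaluate(expr, env):
    if isinstance(expr, int):
        return expr
    if isinstance(expr, str):      # a name: meaning depends on context
        return env[expr]
    op, a, b = expr                # compound form: ("+", a, b)
    if op == "+":
        return evaluate(a, env) + evaluate(b, env)
    raise ValueError(f"unknown operator: {op}")

expr = ("+", "it", 1)                  # the very same expression...
print(evaluate(expr, {"it": 41}))      # 42  ...means one thing here
print(evaluate(expr, {"it": 99}))      # 100 ...and another here
```

So "context changes meaning" is not a mystery; it's routine machinery in any interpreter with an environment.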


>
> I had said:
> So this means that it can be very difficult to determine the meaning of a
> combination of concepts if the program does not explicitly contain a
> reference to that particular combination.
>
> Logan replied:
> That is completely false, it's like saying computer-programming languages
> contain references to every particular combination, when in fact you only
> need to understand the sub-components, similar to how you don't need to
> know every conceivable story to listen to a new story and derive
> meaning from it. In fact the very process of acquiring new information
> falsifies your hypothesis.
>
> The mysteries of the capabilities of human intelligence do not
> automatically falsify hypotheses about the problems of artificial
> intelligence on a computer.
>

Er, no, I'm talking about compilers and interpreters for computer
programming languages. I'm simply also noting that humans have a similar
capacity.


> That is a serious logical error in reasoning.  You cannot transcend the
> boundaries of two very distinct reference subjects without recognizing that
> the argument from one does not necessarily hold for the other. (In some
> discussions that would be ok but there is no reason to believe that my
> reference to "the program" referred to human abilities.)
>
>
> I agree that the ability to ask questions and search through external
> sources of information would allow the program to redirect its search and
> help it to avoid search complexities in some cases.
>
> The simplistic use of generalizations in the 60's did not work to produce
> AI, and different kinds of weighted reasoning in the 70s and the 80s did
> not work either.  Weighted Reasoning can refer to a number of different
> paradigms.  Putting weights on statements is one kind (John Anderson);
> neural networks are another, and Bayesian networks yet another.
>
> The simulation I plan to start with will use a constrained language of
> 100-200 words.  I will start by explicitly directing the program
> (algorithmically) to produce the kinds of data structures that I have  in
> mind for the program, then I will see if I can write the subprograms which
> could use those kinds of data relations to determine what an input sentence
> is referring to.
>

Seriously, do some research on what a compiler or interpreter is; that
way you won't be reinventing the wheel, which seems to be what you're
doing.
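To show what the existing wheel looks like: the sketch below is a recursive-descent parser over a hypothetical constrained vocabulary of my own invention (not Jim's planned system). It stores rules only for the sub-components, never a reference to any particular combination, yet it accepts sentences it was never explicitly given:

```python
# A hypothetical mini-grammar over a constrained vocabulary.  The parser
# knows only the parts (determiners, nouns, verbs) and one combination
# rule, yet it parses sentences never listed anywhere in the program.
NOUNS = {"dog", "cat", "ball"}
VERBS = {"sees", "chases"}
DETS = {"the", "a"}

def parse_sentence(words):
    """Parse DET NOUN VERB DET NOUN into a tree, or raise ValueError."""
    def noun_phrase(i):
        if words[i] in DETS and words[i + 1] in NOUNS:
            return ("NP", words[i], words[i + 1]), i + 2
        raise ValueError(f"expected a noun phrase at position {i}")
    subj, i = noun_phrase(0)
    if words[i] not in VERBS:
        raise ValueError(f"expected a verb at position {i}")
    verb = words[i]
    obj, j = noun_phrase(i + 1)
    if j != len(words):
        raise ValueError("trailing words after the sentence")
    return ("S", subj, ("V", verb), obj)

# This exact sentence appears nowhere above, yet it parses fine.
print(parse_sentence("a cat chases the ball".split()))
```

That is a compiler front end in miniature, and it is the standard answer to "how do you interpret a combination you've never seen?"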


> I will start with something simple and if I make some progress then I will
> try something a little more difficult.  I plan to learn a great deal from
> this process and I expect that my theories about AGI will become stronger.
>
> Jim Bromer
>

Of course, if you know more, they'll be stronger.

>
>
> ------------------------------
> Date: Tue, 23 Apr 2013 07:13:11 -0400
>
> Subject: Re: [agi] Summary of My Current Theory For an AGI Program.
> From: [email protected]
> To: [email protected]
> On Mon, Apr 22, 2013 at 4:13 PM, Jim Bromer <[email protected]> wrote:
>
> Logan,
> Thanks for your comments.
>
> I agree of course that concepts and concept integration may be represented
> by words and sentences.  I was trying to say that many of the
> complications that will arise using word-concepts will arise using some
> other kinds of referential concepts.  One of the reasons that I am
> convinced that text-only AGI is a good way to go is because there is such a
> potential for expressiveness and the representation of different kinds of
> ideas.  It is often difficult to express complicated ideas using words
> because they are not substitutes for the implementations of the things that
> we are talking about.
>
>
> We can implement anything using words, from programs through bridges to
> relationships.
>
> However, that does not mean that they cannot be used as representations of
> ideas.  I understand what I am talking about even though other people do
> not.
>
>
> That simply indicates a need to enhance your communication skills.
>
>
> I believe that when we acquire a learned habit the parts of the habit may
> not be directly understandable but can only be approached indirectly by
> referring to something else.  For instance, a learned action may be
> created by a string of action potentials (for lack of a better name), and
> it may be that the only way to detect the parts of the string is by noting
> the whole, more complicated action.
>
>
> Yeah, many voice-to-text parsers work like that; however, they aren't
> very good at understanding new phrases or different ways of saying
> things.  Both are necessary, likely with some supervised learning, i.e.
> "what did you mean by that?" giving a target, for optimal results.
>
>
> Or we may infer the action by some other action or other event that is
> roughly correlated with the inferred action.  But essentially, when we
> are capable of reflection (meta-cognition) we are able to ‘understand’ a
> concept potential if we know something more about how to use and integrate
> the concept.  So by having some kind of understanding of a concept
> potential we can consciously try to use it in different ways based on some
> kind of reasoning.  Now, if we are not explicitly aware of the concept
> potential, there may be a chance that we can infer something about it
> indirectly, just as we might infer something about an action potential.
> I believe that the theory that it takes many statements to understand one
> simple statement has a great deal of value.
>
>
> Generally you need to write an interpreter or compiler to "understand",
> i.e. compile or interpret, a statement.
>
> Concepts are relativistic.  That means that when a simple concept is used
> in association with other concepts the meaning of the concept can vary.
> Concepts are contextual.  But there are more problems.  Concepts are
> interdependent.  There is not (necessarily) an independent concept and a
> dependent concept in a conceptual function the way there are in a
> mathematical function.
>
>
> Actually, there are a whole host of such axiomatic concepts.  If there
> weren't, we'd just be wishy-washy, never really saying anything.
>
>
> So this means that it can be very difficult to determine the meaning of a
> combination of concepts if the program does not explicitly contain a
> reference to that particular combination.
>
>
> That is completely false, it's like saying computer-programming languages
> contain references to every particular combination, when in fact you only
> need to understand the sub-components, similar to how you don't need to
> know every conceivable story to listen to a new story and derive
> meaning from it. In fact the very process of acquiring new information
> falsifies your hypothesis.
>
>   One way to work with this problem is to rely on generalization systems
> in which the systems of generalizations of a collection of concepts can be
> used to guide the decoding of a particular string of concepts which
> haven’t been seen before.  However, when this was tried in the simplistic
> fashion of the discrete text based programs of the 60’s it did not
> produce intelligence.
>
> Can you give some examples?  Because C, Fortran, and a host of other
> discrete text-string systems exist and seem to have produced significant
> intelligence, e.g. beating world chess champions, among a multitude of
> other achievements.
>
> So in the 70’s weighted reasoning became all the rage because it looked
> like it might be used to infer subtle differences in the strings that
> simple discrete substitution did not.  However, this promise did not
> hold up either.
>
>
> Those are neural nets, I infer, and they are merely one statistical
> tool in an arsenal of learning methods.  Multiple forms of learning, in
> combination with a strong core for knowledge representation, are
> necessary to achieve general intelligence.
>
> Neither system has, in itself, proven sufficient to resolve the
> problem.  My feeling is that the recognition that it takes many
> references to a concept to ‘understand’ that concept is part of the key to
> resolving these problems without hoping to rely on a method that suffers
> from combinatorial complexity.
>
>
> Programming languages and operating systems don't suffer from
> combinatorial complexity, or if they do, it is well managed; yet they are
> the most generally intelligent things/thought-systems on computers.
>
>
> Another part of the key is to recognize that concept objects may contain
> numerous lateral similarities to other concept objects and that these
> similarities may run across the dominant categories of a concept object
> that is being examined.
>
>
> Jim Bromer
>
>
>
>
> On Mon, Apr 22, 2013 at 6:01 PM, Jim Bromer <[email protected]> wrote:
>
> > I just skimmed through the outline html -- it seems like a good
> > start. I wouldn't start writing any code for quite a while yet. It
> > seems to me that you need to fight with those issues first.
>
>
> Thanks for the friendly comment, but I am going to push myself to
> start coding (experimenting) next month.
>
>
> Great! the sooner the better.
>
>
> Formal methods have to be tried and shaped based on extensive applications
> of the methods to real world problems.
>
>
> By doing some programming, you'll gain some insights into how computers
> think.
> Also you'll learn about how to think more logically and rationally.
>
> I am thinking of starting with simple simulations to see if I can
> eventually find some formal methods (programmable methods) that can work
> with the kinds of problems that I will throw at it.
>
>
> What would you be simulating?
>
>
>   If I don't make any progress with that then I might try creating a
> language which is designed to be extensible via generalizations.
> Jim Bromer
>
>
> Didn't you say generalizations failed in the 60's?
> Did you know that, much like people, programming languages are
> extensible through the use of libraries, i.e. books of information?



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
