Jim,

OK, maybe a simple thought experiment will help. Presume for a moment that
the system you imagine exists here and now. You have just turned it on.
Now, start typing (or talking) to make it do what you want it to do:





*Steve*

On Sun, Nov 8, 2015 at 11:58 PM, Jim Bromer <[email protected]> wrote:

> Steve,
> I do not understand how you can misunderstand what I am talking about
> so badly since I have posted hundreds if not thousands of messages on
> groups like this.
> I am not planning on getting a computer program to learn by reading
> the internet with the expectation of having it achieve human like
> abilities. The supposition that that is what I am talking about
> strikes me as nonsensical.
>
> Unfortunately, I probably will not be able to do much at all but I
> hope to at least experiment with a higher learning mechanism that, for
> example, can be taught through conversation. In order to achieve this
> I am planning on using a parallel referent language which will be
> designed to shape knowledge about some things in ways that would be
> like rough prototypes of the kinds of ideas that I have in mind for an
> AI / AGI program.
>
> I agree that in order to 'understand' text the program would need to
> build models about reality. However, I am in total disagreement with
> the idea that an AI program would need to have some sort of embodiment
> to understand text about the world. Such thinking strikes me as
> absurdly naïve. The reason is that I believe very basic parts of AI
> methodology are almost totally missing from contemporary AI programs, and
> the fantasy that slapping some robotics onto one of these primitive AI
> designs would magically make the program more capable of understanding is
> somewhere near the height of absurdity.
>
> So I want to try to improve on the AI programming by working on a
> design that would be able to integrate simple knowledge or knowledge
> about simple things together. I don't think deep learning is good
> enough for this because deep learning neural networks have such a
> crude way of going about this integration of related ideas. Of course,
> I am aware that there are other problems with complexity that are
> difficult to work around. I have been trying to find a polynomial time
> solution to SAT for the past few years because I thought that would
> help me with complexity. However, I haven't been able to make any real
> progress on that and I just don't see how the human mind could
> possibly be using p=np in some way given the propensity of 'organic'
> processes to generate unique manifestations of very irregular
> patterns.
>
> So back to the main topic of my thought. Interactions with the sensory
> world of human beings are not necessary for AI because the information
> that comes in from those senses would have to be integrated well to
> produce 'understanding' just as information that comes in through the
> keyboard would need to be. It is my belief that smart conceptual
> integration is an outstanding problem in contemporary AI and it is a
> problem I can work on.
>
> Unfortunately I won't be able to do much work on the problem. Perhaps
> I squandered too much time writing in groups like this and working on
> my p=np? project. On the other hand, I hadn't really settled down on
> what I think is the best way to approach the problem until the last
> few years. But I would at least like to try to do something.
> Jim Bromer
>
>
> On Sun, Nov 8, 2015 at 5:16 PM, Steve Richfield
> <[email protected]> wrote:
> > Jim,
> >
> > You seem to be suffering from a malady that many others on this forum
> > appear to be suffering from. The primary symptom is the belief that
> > simple observation of HIGHLY noise-ridden text can lead machines to
> > useful epiphanies.
> >
> > The problem is that pretty much everything people write is, in a word,
> > wrong. Overbroad statements (and natural languages can NOT precisely
> > bracket statements), statements made based on unstated presumptions (and
> > no one can list all their presumptions), statements made based on faulty
> > models (and ALL models are faulty, as every physicist knows), confused
> > meanings of words (most words have multiple meanings), lies made for
> > economic or other gain, etc., etc., etc. It is difficult to find ANY
> > clear statements that are unquestionably accurate in all of their
> > potential meanings.
> >
> > OK, so suppose you accept the above and simply want to do the best you
> > can. Still, you can NOT approach the capabilities of an average person,
> > because we have a real world in which to test the many possible
> > interpretations of what we read, whereas a machine can only accept,
> > reject, or assign a probability with NO other information.
> >
> > I used to believe as you do - until 2001 when I became very sick. It
> > took me 3 months of every conscious minute to figure out what was wrong
> > and what might be done about it. Sure I got most of my information from
> > books or the Internet, but there remained several very different models
> > in which this all fit, which I resolved with phone calls to key
> > researchers, and a little of my own primary experimentation to sort the
> > flyshit from the pepper. Still, I remained unsure until what should work
> > did work to cure my condition.
> >
> > When I went back to figure out how the Internet might be reorganized to
> > reduce my search from 3 months to a few minutes, the structure of DrEliza
> > emerged, and later my patent.
> >
> > The problem is that to usefully solve problems you need MODELS, yet
> > natural language (and most human thinking) predates this concept and
> > only provides INFORMATION. Sometimes you can construct a model from
> > information, but this is RARE. Making models requires highly qualified
> > geniuses who can get their arms around entire fields and synthesize
> > models that fit ALL known observations. I have done this in several
> > narrow areas, but it will take a LOT more of this to transform the
> > Internet to a model-based system from which an AI/AGI can usefully
> > address problems of all types.
> >
> > Further, natural language is highly granular - there are far more things
> > that you can NOT say than things you CAN say, so people without even
> > thinking about it round to the nearest syntactically expressible meaning
> > in EVERYTHING they say or write. This "rounding" completely destroys any
> > ability to construct accurate models.
> >
> > Some languages have weak workarounds to this, e.g. German with its
> > concatenated words, or Arabic where they alter spellings for emphasis,
> > but these measures only slightly reduce their granularity.
> >
> > All in all, each person here must either find a way to jump from
> > information to models, or abandon this quest. Sure, information can help
> > people solve problems whose solutions have already been stated, but
> > there are already plenty of experts around who do this quite well. It is
> > the UNsolved problems that are interesting, and it is these UNsolved
> > problems that can NOT EVER be solved by automated means based on
> > people's writings.
> >
> > How can I say not EVER when writings continue to accumulate? Because
> > society's problems also continue to accumulate, so as people find
> > solutions to past unsolved problems, even more new problems emerge to
> > replace them.
> >
> > I hereby proclaim your apparent quest to be theoretically unachievable
> > for the many reasons outlined above. Sure it would be of astronomical
> > value, like the methodology to change lead into gold that so many people
> > put so much effort into, but why waste your time unless/until you can
> > find SOME way around the above-listed barriers?
> >
> > Steve
> > =====================
> >
> > On Sun, Nov 8, 2015 at 9:00 AM, Jim Bromer <[email protected]> wrote:
> >>
> >> Steve,
> >> It will take me some time to reply carefully so let me respond to
> >> something I feel strongly about.
> >>
> >> >>And because it is not an all encompassing
> >> language of communication it could be used to test the 'emergence' of
> >> insight that could arise if enough preparatory work had been done,
> >> even if I haven't figured out how that could be done without the
> >> artificial referent language.
> >> >>
> >>
> >> >There is a VAST chasm between being able to define language
> >> >constructions and meanings, and "insight".
> >> >
> >>
> >> I believe there is a vast chasm between 'simple associations' or 'simple
> >> correlations' or associations derived from 'neural networks' and
> >> conceptual integration. Sophisticated artificial conceptual integration
> >> would make 'insight' feasible, and simple examples across a wide range
> >> of subject matter should arise fairly quickly. But since AI programs are
> >> only capable of the simplest examples of 'insight', declarations about
> >> the chasm between AI and 'insight' are expected. So I totally disagree
> >> with you about this. I feel that your feelings about this are
> >> historically accurate but have little to do with the potential
> >> near-future. As I say, I do not recall hearing about an AI program that
> >> is capable of learning via conversation except for extremely simple
> >> domains. I feel that I have a solution for this problem, but the trial
> >> and error process of getting from where I am now to where I think I can
> >> get is so overwhelming a challenge that my decision to use the
> >> artificial referent para-language makes sense.
> >>
> >> Jim Bromer
> >>
> >> On Sun, Nov 8, 2015 at 11:19 AM, Steve Richfield
> >> <[email protected]> wrote:
> >>>
> >>> Jim,
> >>>
> >>> FINALLY - SOMEONE who wants to discuss PRACTICAL implementations of
> >>> TAI/TAGI.
> >>>
> >>> Continuing...
> >>> On Sun, Nov 8, 2015 at 7:14 AM, Jim Bromer <[email protected]> wrote:
> >>>>
> >>>> After I wrote that message I realized that I had tried to start
> >>>> discussions about an artificial language that could be used to shape a
> >>>> general AI program before. Many of these discussions were side tracked
> >>>> when people started talking about Esperanto or about lambda calculus
> >>>> based artificial languages and stuff like that. That is not what I am
> >>>> thinking of.
> >>>
> >>>
> >>> You mean, having syntax like:
> >>>
> >>> When that I write "xxxx" I mean "yyy".
> >>>
> >>> to define idioms, for more subtle things like:
> >>>
> >>> Consider that when I write "," I may mean ";".
> >>>
> >>> which expresses potential alternative interpretations of future
> >>> writings?
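A minimal sketch of how such definite rewrites ("when I write X I mean Y") and tentative alternatives ("when I write X I may mean Y") might behave; the `Interpreter` class and all names here are hypothetical illustrations, not anything proposed in the thread:

```python
# Sketch: definite idiom rewrites are applied unconditionally; tentative
# rules fork each reading of the text into alternative interpretations.

class Interpreter:
    def __init__(self):
        self.rewrites = {}      # idiom -> definite replacement
        self.alternatives = {}  # token -> possible alternative token

    def define(self, written, meant, tentative=False):
        if tentative:
            self.alternatives[written] = meant
        else:
            self.rewrites[written] = meant

    def readings(self, text):
        """Return every interpretation of `text` under the current rules."""
        for written, meant in self.rewrites.items():
            text = text.replace(written, meant)
        results = [text]
        for written, meant in self.alternatives.items():
            # Each tentative rule forks every reading produced so far.
            results += [r.replace(written, meant) for r in results if written in r]
        return results

interp = Interpreter()
interp.define("xxxx", "yyy")             # When I write "xxxx" I mean "yyy".
interp.define(",", ";", tentative=True)  # When I write "," I may mean ";".
print(interp.readings("xxxx, then stop"))
```

The point of keeping multiple readings, rather than committing to one, is that later rules (or later text) could then rank or prune the alternatives.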
> >>>>
> >>>>
> >>>> The artificial language could be used with video or audio or other
> >>>> kinds of IO environments, but I would use it along side of an attempt
> >>>> to get the AI program to learn to use a natural language.
> >>>
> >>>
> >>> I did a VERY similar thing in a FORTRAN/ALGOL/BASIC compiler I once
> >>> wrote for Remote Time-Sharing Corp. It started out as a very simplistic
> >>> metacompiler, to which I fed a description of a more capable
> >>> metacompiler, in which language I fed it a description of an optimizing
> >>> metacompiler.
> >>>
> >>> This could easily be done in a language like English, where a
> >>> rule-driven system like I have been discussing here has rules whose
> >>> function is to introduce new rules.
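The "rules whose function is to introduce new rules" idea could be sketched as a tiny rule engine in which one built-in meta-rule installs further rewrite rules; the `RuleSystem` class and the `define: X => Y` syntax are hypothetical illustrations, not anything from the thread:

```python
# Sketch: a rule is a (pattern, action) pair. One built-in rule's action
# is to parse a teaching sentence and register a new rewrite rule.

class RuleSystem:
    def __init__(self):
        self.rules = []  # list of (pattern, action) pairs

    def add_rule(self, pattern, action):
        self.rules.append((pattern, action))

    def process(self, sentence):
        # Snapshot the rule list so rules added mid-pass apply next time.
        for pattern, action in list(self.rules):
            if pattern in sentence:
                return action(self, sentence)
        return None

def define_rule(system, sentence):
    # The meta-rule: "define: X => Y" installs a new rewrite rule for X.
    _, _, tail = sentence.partition("define: ")
    pattern, _, replacement = tail.partition(" => ")
    system.add_rule(pattern,
                    lambda sys, s, p=pattern, r=replacement: s.replace(p, r))
    return f"learned rule for {pattern!r}"

rs = RuleSystem()
rs.add_rule("define: ", define_rule)  # the rule that introduces rules
rs.process("define: hi => hello")     # teaches a new rewrite
print(rs.process("hi there"))         # the taught rule now fires
```

A real system would of course need conflict resolution among rules and some way to retract wrongly learned ones, which is exactly the erroneous-learning problem raised below in the thread.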
> >>>
> >>>>
> >>>> One of the
> >>>> dreams of old AI was that if you started instructing the program to
> >>>> learn using the artificialities of some kind of language it would
> >>>> eventually have enough information for genuine learning to emerge.
> >>>
> >>>
> >>> The thing that seems to be the killer here is erroneous learning of
> >>> various sorts. Superstitious learning is theoretically unavoidable.
> >>> Once you get something erroneous into such a system, it becomes
> >>> difficult/impossible to get it out. A VERY simple demonstration comes
> >>> in trying to use Dragon NaturallySpeaking's speech input to correct its
> >>> errors in your dictation. As you would expect it makes errors in trying
> >>> to correct the errors, and this often compounds to overwhelm any hope
> >>> of setting things right.
> >>>
> >>> Add to that not knowing exactly what a computer got wrong, or even
> >>> being able to recognize that the computer got something wrong, and you
> >>> can see how difficult/impossible it is to correct wrongly "learned"
> >>> rules.
> >>>
> >>>>
> >>>> This never really worked. Why not? Partly because computers were not
> >>>> powerful enough in the old days
> >>>
> >>>
> >>> And still aren't - unless you use my patented LFU methodology.
> >>>
> >>>>
> >>>> and, in my opinion, AI researchers had
> >>>> not appreciated the necessity of sophisticated data integration
> >>>> methods for some reason. (Old computer systems might one day be shown
> >>>> to have been potentially powerful enough to run some future program
> >>>> but they were not powerful enough to entertain the trial and error
> >>>> process that would have been required using experimental programs of
> >>>> the day.
> >>>
> >>>
> >>> The advantage in LFU is about the same as the advantage of a modern PC
> >>> over an old vacuum tube clunker, so yes, they could have done a LOT
> >>> more, way back then.
> >>>
> >>> The "cycle time" of an IBM-709 computer was 12 microseconds, and most
> >>> instructions took two cycles - one to access and interpret the
> >>> instruction, and one to access and operate on the operand.
> >>>
> >>>>
> >>>> For example, with better conceptual integration methods a
> >>>> future efficient AI program might be used on an old computer system
> >>>> just to show that it could be run on it.)
> >>>
> >>>
> >>> No, except for a few in the Computer Museum's display in Cupertino,
> >>> they have all been melted down for their scrap metal, and the Museum
> >>> won't turn them back on.
> >>>>
> >>>>
> >>>> So the artificial referent language would not be a complete language
> >>>> (of communication) like Esperanto wants to be. And it would not be a
> >>>> logically sound language like lambda calculus wants to be. It could be
> >>>> used to establish referents from the IO data environment. It would
> >>>> need to be capable of denoting a distinction between how those data
> >>>> objects can be used. For example in natural language there is an
> >>>> important distinction between syntax and semantics. So if I used this
> >>>> referent language with a natural language IO then one of the
> >>>> artificialities would be to distinguish syntactic relations from
> >>>> semantic relations. On the other hand, this distinction is not always
> >>>> necessary, desired or clear cut. To explain this, many (or maybe most)
> >>>> (what I think are) desirable syntactic relations are based on some
> >>>> semantic conditions. But then again there is no reason not to design
> >>>> the artificial language to be able to represent relations that are
> >>>> mixes of semantics and syntax.
> >>>
> >>>
> >>> Leaving a stupid computer to untangle such messes is probably a
> >>> mistake. However, it would be fairly easy to provide a mechanism for
> >>> people to specify such things.
> >>>>
> >>>>
> >>>> As I see it, the main problem with language based AI has been the lack
> >>>> of a really good conceptual integration solution.
> >>>
> >>>
> >>> This broad statement could be said about ANYTHING people haven't yet
> >>> seen a way to make work - like AGI.
> >>>>
> >>>>
> >>>> One of the reasons I write to groups like this is that I want to get
> >>>> some ideas about how an idea might work.
> >>>
> >>>
> >>> Same here.
> >>>
> >>>>
> >>>> But when I wrote about an
> >>>> artificial para-language before I wasn't really sure if I even wanted
> >>>> to use it. I finally have come to the conclusion that it makes a lot
> >>>> of sense. I can use it to speed up tests about my AI/AGI theories but
> >>>> then I could also test those theories with more relaxed instructions.
> >>>> So the artificial para-referent language would not be an
> >>>> all-encompassing language of communication; it would not be a
> >>>> logically sound language other than to denote semantic and syntactic
> >>>> references and relations based on mixes of semantic and syntactic
> >>>> references. It could also denote relations that I think would be
> >>>> important to a text-based AI/AGI program. Because the logic of the
> >>>> method would not be tight and a contradicting case would not (always)
> >>>> lead to an artificially reported error, the AI methods would have to
> >>>> do some learning for themselves. So the para-referent language would
> >>>> not sidetrack the whole effort, because the AI methods would have to
> >>>> have the potential to exhibit some genuine learning. And because it is
> >>>> not an all-encompassing language of communication it could be used to
> >>>> test the 'emergence' of insight that could arise if enough preparatory
> >>>> work had been done, even if I haven't figured out how that could be
> >>>> done without the artificial referent language. The benefit is that I
> >>>> could use it to test and develop my AI theories. I am really excited
> >>>> by this idea this time.
> >>>
> >>>
> >>> There is a VAST chasm between being able to define language
> >>> constructions and meanings, and "insight".
> >>>
> >>> Steve
> >>> ======================
> >>>>
> >>>> Jim Bromer
> >>>>
> >>>>
> >>>> On Sat, Nov 7, 2015 at 10:22 PM, Jim Bromer <[email protected]> wrote:
> >>>> > I was just working on my latest p=np? idea and I hit up against a
> >>>> > method that is either in np or is otherwise extremely inefficient.
> >>>> > So I have to come to the conclusion that the human mind is not
> >>>> > capable of SAT in p.
> >>>> >
> >>>> > So then how do we figure how to deal with so many complicated
> >>>> > situations? Of course I still don't know because so many situations
> >>>> > seem similar to a SAT problem. The mind must be able to detect many
> >>>> > different things that are going on at once or which might be useful
> >>>> > to recall from memory to deal with a situation. But still, there is
> >>>> > nothing in my own introspective analysis of my thinking which looks
> >>>> > anything like a p=np process.
> >>>> >
> >>>> > So what is wrong with AI? One thing that AI has been consistently
> >>>> > lacking is the ability to learn through conversation. My feeling is
> >>>> > that this is not just a problem with communication but a learning
> >>>> > problem as well. In other words AI is not able to truly learn except
> >>>> > in a few special cases. Most of those special cases are examples of
> >>>> > narrow AI but there are others where the learning that takes place
> >>>> > isn't necessarily like other narrow AI but where the domain of
> >>>> > learning is so restricted that it is narrow in the sense that the
> >>>> > applicability of the method is limited.
> >>>> >
> >>>> > Then I started thinking of an artificial language which can refer to
> >>>> > situations or objects in the IO data environment and which can be
> >>>> > used to instruct a program as it is running. I think this is an
> >>>> > unusual idea.
> >>>> >
> >>>> > One of the characteristics about programming methods that seem to
> >>>> > catch on with programmers is that they can be used in a very simple
> >>>> > manner and in more complicated programming. I think an artificial
> >>>> > language which could be used to instruct a computer to notice
> >>>> > objects in the IO data environment and which could also be used to
> >>>> > refine those instructions using this artificial language with the
> >>>> > references that it had previously established has a lot of
> >>>> > potential. And it can help us become more clear about what is needed
> >>>> > to make better AGI programs.
> >>>> > Jim Bromer
>
>
> -------------------------------------------
> AGI
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/10443978-6f4c28ac
> Modify Your Subscription:
> https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com
>



-- 
Full employment can be had with the stroke of a pen. Simply institute a six
hour workday. That will easily create enough new jobs to bring back full
employment.


