Steve,
So you are defining a numerical system (like a vector system) using the
most significant semantic units?  I could see how that (or some other
numerical system that used a finite number of defined semantic units) might
be very effective at some kind of fundamental parsing. However, I don't
think systems like this would be very useful for general AGI, because the
system would have to be capable of learning millions of sub-cases, such as
how particular people use words in various circumstances.

Are you talking about something like that?
Jim Bromer

On Mon, Mar 25, 2013 at 12:57 PM, Jim Bromer <[email protected]> wrote:

> On Fri, Mar 22, 2013 at 6:16 PM, Steve Richfield <
> [email protected]> wrote:
>
>> PM,
>> Reading these, I can see that:
>> 1.  Working with ordinals would speed this process up by more than an
>> order of magnitude in performing *exactly* the same analysis, over
>> working with character strings...
>>
>
> Could you explain this kind of remark to me?  I haven't been able to
> figure out a way to make any kind of numeric method work well over the
> general kinds of relations that you'd expect to encounter in AGI.  If all
> systems had a direct correspondence to a dimensional system, then you
> could get some traction out of these methods.  Or, if general reasoning
> did not need to rely on both intersections and simple arithmetic (or
> simple logic), then numeric methods would be extremely efficient.
>
> Jim Bromer
>
>
>
> On Fri, Mar 22, 2013 at 6:16 PM, Steve Richfield <
> [email protected]> wrote:
>
>> PM,
>>
>> Reading these, I can see that:
>>
>> 1.  Working with ordinals would speed this process up by more than an
>> order of magnitude in performing *exactly* the same analysis, over
>> working with character strings, e.g. in LISP.
>>
>> 2.  The first book describes a system that finds itself "in the weeds"
>> at the first syntactical break in a sentence, where it "jumps to
>> confusions" by presuming the next word to be the beginning of a new
>> sentence (when it is more likely that a presumed noun was missing), an
>> issue that the second book apparently seeks to address.
>>
>> 3.  In dealing with the ontological and other subtle issues, the method
>> described in the second book will have to make the SAME tests that any
>> other system would have to make to see whether particular semantic
>> structures are present. What good is it to avoid semantic structures,
>> only to have to analyze them later?
>>
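A minimal sketch of the "ordinals vs. character strings" idea from point 1,
assuming a simple token interner (the names here are hypothetical, not
anything from Steve's actual system): each distinct token is assigned an
integer once, so all later comparisons are single integer tests rather than
character-by-character string comparisons.

```python
# Hedged sketch: intern each distinct token once, so equality checks on
# the resulting ordinals reproduce equality on the original strings.

def make_interner():
    table = {}  # token string -> ordinal (assigned in order of first sight)

    def intern(token):
        if token not in table:
            table[token] = len(table)
        return table[token]

    return intern

intern = make_interner()
tokens = "the cat sat on the mat".split()
ordinals = [intern(t) for t in tokens]

# "the" appears twice and maps to the same ordinal both times, so the
# downstream analysis can compare cheap integers instead of strings.
assert ordinals[0] == ordinals[4]
assert len(set(ordinals)) == 5  # five distinct words
```

The same analysis runs on either representation; the ordinal form just
makes every token comparison O(1) instead of O(length of the string).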
>> Note that WolframAlpha.com and DrEliza.com don't even bother "parsing" in
>> the same sense as Hausser uses the term, and instead work only with
>> identifiable semantic units. These applications have little use for that
>> kind of syntactic information. I have looked at what the big costs to
>> DrEliza.com are from the lack of full parsing. The primary problem is
>> that it is blind to whether someone is describing their own problems or
>> someone else's. Also, when negation meets compound and complex sentence
>> structure, the logic in DrEliza.com is more likely to misunderstand it
>> than get it right, so I have disabled acting on such sentences.
>>
>> Note that improper multiple negation is SO common in everyday English
>> that correct parsing is as likely as not to arrive at the wrong meaning.
>>
>> I think the "break" in this discussion is that almost everyone's ultimate
>> goal is to identify the semantics, whereas this method identifies the
>> syntax. A presumption has been made that parsing syntax is a necessary
>> step on the way to recognizing semantics, a presumption clearly made here
>> in rejecting the analysis of semantic structures. Unfortunately,
>> semantics is EXACTLY what most applications need from a parser, so this
>> "hole" must be filled in later in the analysis, and filling it in will
>> slow this approach down exactly as it slows other approaches down.
>>
>> It is ALWAYS faster to skip the hard stuff, which is really great if you
>> don't need it. I can see Hausser's approach working as part of a language
>> translator, ESPECIALLY for scientific material like the Russian Academy of
>> Sciences is now working on, where the translation wants to AVOID semantic
>> analysis as much as possible. A computer can potentially only "understand"
>> things that are already known, whereas the entire object of scientific
>> papers is to explore the *UN*known.
>>
>> On a side note: there are copious spelling errors, so there couldn't
>> have been much review of this material. If the author can't even get his
>> friends to read his writings...
>>
>> Did I miss anything?
>>
>> Steve
>> =================
>
>



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
