Hello Michael,

Thank you for your email, your paper, and the discussion between
Jim, Steven, and Boris.  It'll take me a few days to look at your
paper, but before there are too many more contributions to the
ongoing discussion let me respond to some items:

1. Many teachers have recorded a classroom presentation and
   transcribed the recording, only to be quite surprised at
   what they actually said...

This is very true of spoken language and requires what's called
robustness.  In DBS it is supplied by the lowest level of pattern
matching, which correlates the core values in the spoken text with
corresponding contents in the database.  The amount of content
coactivated in this way is reduced by inferencing.  (See Sect. 11
in the paper, FoCL 6.1.2, CLaTR Sect. 5.4.)

2. For all but AGI (which can't work for decades with any presently
   known approach because of a lack of processor power) and automatic
   language translation (which has a large interest in preserving the
   speaker/writer's frame of mind), there seems to be little real-world
   application for agent-oriented approaches.

It seems to me that a computational theory of any kind should take
care to be of low mathematical complexity (linear or at worst
polynomial).  As Garey and Johnson showed in 1979 (FoCL 8.2.2), an
algorithm may be decidable, but if it is exponential it may take
longer than the existence of the universe, currently estimated at
13.77 billion years.  So that wouldn't be helped by faster machines.
As for applications of DBS, please see Sect. 13 in the paper.

3. Summarizing from what I read of the discussion between Jim,
   Steve, and Boris, it seems an open question whether computers
   can *understand* natural language and engage in meaningful
   dialog.

DBS takes the view that full understanding by a computer requires
an agent with a body in the real world, with interfaces for
recognition and action.  It can use the elementary recognition
(e.g., red) and action procedures (e.g., take one step forward) as
its basic concepts for building content, and reuse the associated
types as the meanings of natural language content words.

It is possible to move from such a talking robot to virtual agents
which are essentially restricted to the keyboard and the screen of
a standard computer.  However, as a consequence the virtual agents
lose the procedures for autonomous recognition and action, and are
thus reduced to core value *place holders* which are understood by
the human users, but not by the machine.  There are many
applications for which virtual agents are sufficient.  Also, they
may make do with only the hear mode, leaving the think and the
speak modes aside.

A general theory of how natural language works is nevertheless
useful for such applications because it provides a framework which
makes different applications compatible with each other.  Also, the
framework may supply applications with off-the-shelf components
like automatic word form recognition, syntactic-semantic parsers,
etc., in different languages, resulting in further standardization
and interoperability.

Happy Easter to you all!

Roland

Date: Mon, 1 Apr 2013 02:35:44 -0700
Subject: Re: [agi] Steve's placement/payload theory of language
From: [email protected]
To: [email protected]

Anastasios,

On Sun, Mar 31, 2013 at 6:47 PM, Anastasios Tsiolakidis <[email protected]> 
wrote:


On Sun, Mar 31, 2013 at 10:55 PM, Steve Richfield <[email protected]> 
wrote:


Everyone in AGI seems to want to start at the front end (parsing) without 
knowing where they are going.

My point throughout the discussion you quoted from is that most people expect 
things from NL "understanding" that are completely unachievable. Sure, you can 
tease out a LOT of the sort of information you discuss below, but most of it 
would come with Bayesian probabilities that aren't much better than 50%, and it 
isn't at all obvious what to do with such soft data.
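A quick sketch of why near-50% evidence is so hard to act on: in odds form, Bayes' rule multiplies the prior odds by the likelihood ratio of each cue, and a weak cue barely moves a 50% prior (the numbers here are illustrative only, not from any actual NL system):

```python
def posterior(prior, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    odds = prior / (1.0 - prior) * likelihood_ratio
    return odds / (1.0 + odds)

# One weak cue (likelihood ratio 1.2) moves a 50% prior to only ~54.5%:
print(f"after one weak cue: {posterior(0.5, 1.2):.3f}")

# Many independent weak cues must stack before the conclusion firms up:
p = 0.5
for _ in range(25):
    p = posterior(p, 1.2)
print(f"after 25 weak cues: {p:.3f}")
```

The practical problem Steve raises is precisely that real NL cues are neither plentiful nor independent enough for this stacking to work reliably.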

It is difficult, for me at least, to follow these threads and make up my mind 
if you agree or disagree with each other, if you made up your own minds at 
least etc.


We have discussed a LOT of details, but I sense general agreement.
 


But Steve seems to include again and again some inaccuracies. Specifically, I 
am not ready to count even a single failure of NLP or AGI-NLP

I have avoided naming names, but the literature is FULL of NL parsing and 
"understanding" projects, many of which got to the point of demonstrating 
interesting things, but then they faded away, instead of being populated with 
rules and turned into products. After talking with some of these people, and 
then running into my own brick wall in DrEliza.com, I decided to find a better 
way.


 
since the systems I am familiar with have tried everything except the most 
obvious (and difficult): to model agents with a mix of intricate biased and 
unbiased world models and intentions. Language without a minimum of two mental 
worlds and one "objective" world is nothing but mad ramblings.


Perhaps, but does it make sense to parallel this process to tease out this 
information? The obvious answer is "yes", but there are a LOT of problems doing 
this in real time.



Similarly, several of the AGI builders of the day, myself included, started 
away from parsing and closer to either the mental worlds and/or the objective 
one(s), and Ben, for example, is not in a hurry to focus on the front end. 
Shame on us, I'd say, since after decades of publications on summarization, 
disambiguation, etc., it was a 17-year-old who cashed in his summarization 
service. As Steven mentioned before, the world could be a different place if a 
few of us here had multimillion-dollar liquidity.

Yeah, either you guys will start converting your IP to cash, or forever remain 
closet AGI-seekers. AGI is WAY too big for any one person to ever build. It 
would be a challenge for one person just to build and maintain the parsing and 
disambiguation rules for everyday English, let alone all of the OTHER things 
you would have to do to build an AGI. Without cash, you will forever be wage 
slaves, while others build AGIs or whatever with your efforts.


Then again, Yahoo slapped us all in the face by withdrawing Summly, presumably 
suggesting we are a bunch of losers and can neither improve upon nor match 
Summly's achievements in reasonable time.


Is Summly's algorithm described somewhere?

Note a quirk of law: It is conceivable that Summly had adopted my algorithm but 
kept it proprietary. As such, Yahoo would have NO claim on the technology, and 
their work would NOT count as prior art. It happens all the time - people 
validly patent things that it turns out someone else has already developed. 
These patents are fully enforceable.


These questions will soon be answered for my invention, because my application 
has been "made special" (fast tracked).
 
Or can we?

Again, the challenge with AGI is a lack of anything resembling a spec. It is 
hard to design something to perform an undefined function.


However, my invention was NOT what to do, but how to do such things faster. The 
combinatorial explosion from failed tests hangs over the head of all NL 
"understanding" efforts. From what I can see, my method is the ONLY presently 
known way of prospectively running fast enough, once the rules/tables/DB are 
populated with all the information needed to process everyday English (or other 
natural language).
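The combinatorial explosion Steve refers to is easy to quantify with a standard illustration (a textbook bound on structural ambiguity, not a description of Steve's own method): the number of distinct binary parse trees over a sentence of n words is the (n-1)-th Catalan number, which grows roughly fourfold with every added word.

```python
from math import comb

def parse_trees(words):
    """Catalan number C(words - 1): distinct binary trees over `words` leaves."""
    n = words - 1
    return comb(2 * n, n) // (n + 1)

for w in (5, 10, 20, 40):
    print(f"{w:2d} words -> {parse_trees(w)} possible binary parses")
```

Without aggressive pruning, each of those structures is a candidate a naive parser may build and then discard, which is the "failed tests" cost that dominates the running time.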


Steve


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
