Mike, I am not sure why I want to discuss this with you. I have made a 
breakthrough in a simple computational method which is directly related to 
AGI. However, it may not be enough to overcome the AGI complexity problem. 
But, for just a moment, suppose that it did. Suppose that I was able to 
create a working demo of an AGI program which showed some genuine ability to 
learn new things. Would that mean that you were right and I was wrong? If I 
expressed my viewpoint that I had found a way to overcome the complexity 
problem, and if my method actually worked, then wouldn't my interpretation 
of the situation have to be closer to the truth than yours? Yes, this is 
just an imaginary situation, but it is different from your imaginary 
situation. But the question is: if I was able to devise a system to overcome 
what I considered the complexity problem of AGI, would that mean that:

1. There never was a complexity problem.
2. There was a complexity problem that needed to be overcome, and once 
someone figured it out, AGI became feasible.
3. Your point of view was just as good as mine because we were both right 
in a way.
4. Neither point of view was right.
5. None of the above.
6. All of the above.
7. Whatever I decide must be right, and you can work it out from there.

Jim Bromer
 From: [email protected]
To: [email protected]
Subject: Re: [agi] Re: Summary of My Current Theory For an AGI Program.
Date: Tue, 16 Apr 2013 17:17:22 +0100

It’s not prejudging. And it’s not particularly directed at you.
 
You are simply following an intellectually mad, widespread, GOFAI notion 
about the potential productivity of pure language/text analysis – a notion that 
has already demonstrably wasted God knows how many years of would-be AGI-ers’ 
lives. Look at the idiocy (and incorrigibility) of Lenat’s enterprise.
 
Similarly, you are following an equally old-fashioned and mad notion that 
the complexity which has bedevilled narrow AI has something to do with AGI – 
of which you also cannot produce a single problem example. No examples, no 
evidence = mucho waste of life.

From: Jim Bromer 
Sent: Tuesday, April 16, 2013 3:22 PM
To: AGI 

Subject: RE: [agi] Re: Summary of My Current Theory For an AGI Program.
 

Mike, 
I am only replying now because I want to see if the formatting of 
Hotmail.com is compatible with Listbox.  I would be happy to 
talk to you about this after I finish my summary if you could avoid prejudging 
what I might have to say.  This kind of remark, "Give one example of the 
kind of productive text analysis you (or anyone else) mean[s] – and you’ll find 
it is impossible and save yourself years of life," is really a blatant example 
of prejudging.  I feel that personal remarks interfere with what is being 
said even though they could be useful if used sparingly.  Prejudging what 
someone is going to say is a kind of personal remark. 

---------- Forwarded message ----------
From: Mike Tintner <[email protected]>
Date: Tue, Apr 16, 2013 at 9:38 AM
Subject: Re: [agi] Re: Summary of My Current Theory For an AGI Program.
To: AGI <[email protected]>

So we’re talking about text analysis?  (That didn’t hurt, did it?)
 
Give one example of the kind of productive text analysis you (or anyone 
else) mean[s] – and you’ll find it is impossible and save yourself years of 
life. And you could at least start a productive discussion here. [Note that 
Steve was just specific about his proposed project – and that produced a useful 
discussion].
 
Lots of people seem to have fantasies about a supposed AGI program that is 
going to become wise and ultimately rule the world through analysing the 
texts on the net. It’s total cobblers. As I’ve pointed out, there isn’t a 
program that can productively analyse the possible combinations of two or 
three words, let alone two sentences, let alone the contents of one or two 
texts. 
 
The fantasies are all Chinese room fantasies about how a manipulator of 
meaningless words enclosed in a black box can become supremely wise about the 
outside world, without ever venturing outside.  Fantasies of real world 
wisdom without real world experience.
 
That’s how science became so relatively wise about the world, right? – by 
scientists staying inside their studies and playing with words and logic? 
Or did Francis Bacon first have to smash that fantasy?
 
From: Jim Bromer 
Sent: Tuesday, April 16, 2013 2:08 PM

To: AGI 
Subject: Re: [agi] Re: Summary of My Current Theory For an AGI Program.
 
On Mon, Apr 15, 2013 at 11:29 AM, Alan Grimes <[email protected]> wrote:


  Mike Tintner wrote: 
    What’s your O.D.? What’s the end-product of your program? Drawings? 
    Buildings? Text-readings? Wtf is it going to DO? Or is that too 
    difficult for you to say?

  ... I'm getting sick of these jags you go off on. Last week it was "Well 
  your AI doesn't implement true creativity; prove that it does!"

  This week you are ignoring the G in general AI. The word GENERAL in AI, 
  as in computer science at large, means "virtually any." So it must be 
  capable of dealing with virtually any problem in virtually any domain 
  using virtually any method. It therefore must be able to learn any 
  abstraction less than or equal to some reasonable complexity metric, and 
  it must have the computational capabilities to optimize and apply those 
  abstractions. 
  
  ...
 

Alan,
My text-based AGI program would be a limited kind of AGI program, but it 
would be a proof of concept.  If it worked, then it would be general enough 
to be converted for different kinds of IO actions.  A program that could do 
some genuine learning and derive abstractions from text would be flexible 
enough to be modified for conversion to image AGI and so on.
Jim Bromer
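
(For concreteness, a minimal sketch of one naive way a program might begin 
to "derive abstractions from text": grouping words by the contexts they 
share. This is purely illustrative – the co-occurrence approach and every 
name in it are assumptions for discussion, not a description of the actual 
program.)

from collections import defaultdict

def context_profiles(text, window=2):
    """Map each word to the set of words seen near it (its 'context')."""
    words = text.lower().split()
    profiles = defaultdict(set)
    for i, w in enumerate(words):
        lo, hi = max(0, i - window), min(len(words), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                profiles[w].add(words[j])
    return profiles

def shared_context(profiles, a, b):
    """Crude Jaccard similarity: fraction of contexts two words share."""
    pa, pb = profiles[a], profiles[b]
    if not pa or not pb:
        return 0.0
    return len(pa & pb) / len(pa | pb)

if __name__ == "__main__":
    corpus = ("the cat sat on the mat . the dog sat on the rug . "
              "the cat chased the dog .")
    p = context_profiles(corpus)
    # 'cat' and 'dog' occur in similar contexts, so a learner might group
    # them under a single abstraction (something like 'animal noun').
    print(shared_context(p, "cat", "dog"))

Words that score high on shared context could then be merged into a single 
symbol and the process repeated on the rewritten text – one hypothetical way 
"abstractions" could be built up layer by layer.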
-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
