I agree with Piaget... but this reads more like a brain dump. Not a bad
brain dump, I mean, it's fine, but it's still a brain dump, and it's hard
to get to the meat of it.
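On Tintner's combinatorics point further down: the raw numbers are easy to check with a back-of-the-envelope calculation. A minimal sketch, assuming (hypothetically) a working vocabulary of 50,000 words:

```python
from math import comb  # exact binomial coefficients, Python 3.8+

VOCAB = 50_000  # assumed vocabulary size, for illustration only

# Unordered combinations of two and three distinct words.
pairs = comb(VOCAB, 2)
triples = comb(VOCAB, 3)

print(f"2-word combinations: {pairs:,}")    # about 1.25 billion
print(f"3-word combinations: {triples:,}")  # about 2.1e13
```

Ordered sequences would be larger still (multiply by 2! and 3! respectively), which is the scale a brute-force "analyse all combinations" approach would face before any semantics enters the picture.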

On 4/16/13, Mike Tintner <[email protected]> wrote:
> It’s not prejudging. And it’s not particularly directed at you.
>
> You are simply following an intellectually mad, widespread, GOFAI notion
> about the potential productivity of pure language/text analysis – a notion
> that has already demonstrably wasted God knows how many years of would-be
> AGI-ers’ lives. Look at the idiocy (and incorrigibility) of Lenat’s
> enterprise.
>
> Similarly, you are following an equally old-fashioned and mad notion that
> the complexity which has bedevilled narrow AI has something to do with AGI
> – of which you also cannot produce a single problem example. No examples, no
> evidence = mucho waste of life.
>
> From: Jim Bromer
> Sent: Tuesday, April 16, 2013 3:22 PM
> To: AGI
> Subject: RE: [agi] Re: Summary of My Current Theory For an AGI Program.
>
> Mike,
> I am only replying now because I want to see if the formatting of
> Hotmail.com is compatible with listbox.  I would be happy to talk to you
> about this after I finish my summary if you could avoid prejudging what I
> might have to say.  This kind of remark, "Give one example of the kind of
> productive text analysis you (or anyone else) mean[s] – and you’ll find it
> is impossible and save yourself years of life," is really a blatant example
> of prejudging.  I feel that personal remarks interfere with what is being
> said even though they could be useful if used sparingly.  Prejudging what
> someone is going to say is a kind of personal remark.
>
>
> ---------- Forwarded message ----------
> From: Mike Tintner <[email protected]>
> Date: Tue, Apr 16, 2013 at 9:38 AM
> Subject: Re: [agi] Re: Summary of My Current Theory For an AGI Program.
> To: AGI <[email protected]>
>
>
>
> So we’re talking about text analysis?  (That didn’t hurt, did it?)
>
> Give one example of the kind of productive text analysis you (or anyone
> else) mean[s] – and you’ll find it is impossible and save yourself years of
> life. And you could at least start a productive discussion here. [Note that
> Steve was just specific about his proposed project – and that produced a
> useful discussion].
>
> Lots of people seem to have fantasies about a supposed AGI program that is
> going to become wise and ultimately rule the world through analysing the
> texts on the net. It’s total cobblers. As I’ve pointed out, there isn’t a
> program that can productively analyse the possible combinations of two or
> three words, let alone two sentences, let alone the contents of one or two
> texts.
>
> The fantasies are all Chinese room fantasies about how a manipulator of
> meaningless words enclosed in a black box can become supremely wise about
> the outside world, without ever venturing outside.  Fantasies of real world
> wisdom without real world experience.
>
> That’s how science became so relatively wise about the world, right? – by
> scientists staying inside their studies and playing with words and logic? Or
> did Francis Bacon first have to smash that fantasy?
>
>
>
> From: Jim Bromer
> Sent: Tuesday, April 16, 2013 2:08 PM
> To: AGI
> Subject: Re: [agi] Re: Summary of My Current Theory For an AGI Program.
>
> On Mon, Apr 15, 2013 at 11:29 AM, Alan Grimes <[email protected]> wrote:
>
>   Mike Tintner wrote:
>     What’s your O.D. ? What’s the end-product of your program? Drawings?
> Buildings? Text-readings? Wtf is it going to DO? Or is that too difficult
> for you to say?
>
>   ... I'm getting sick of these jags you go off on. Last week it was "Well
> your AI doesn't implement true creativity; prove that it does!"
>
>   This week you are ignoring the G in general AI. The word GENERAL in AI,
> like in computer science at large, means "virtually any." So it must be
> capable of dealing with virtually any problem in virtually any domain using
> virtually any method. It must therefore be able to learn any abstraction
> less than or equal to some reasonable complexity metric, and it must have
> the computational capabilities to optimize and apply those abstractions.
>   ...
>
>
>
> Alan,
> My text-based AGI program would be a limited kind of AGI program, but it
> would be a proof of concept.  If it worked, then it would be general
> enough to convert for different kinds of IO actions.  A program that
> could do some genuine learning and derive abstractions from text would be
> flexible enough to modify for conversion to image AGI and so on.
> Jim Bromer
>
>       AGI | Archives  | Modify Your Subscription
>
> -------------------------------------------
> AGI
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/11943661-d9279dae
> Modify Your Subscription:
> https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com
>

