YOU said teaching an AGI was cheating.

YOU want others to talk straight to your points and issues, yet you make it
look like I don't think much of generalizing in an AGI design.  The fact is
that generalizing is at the heart of my design, and it is a totally different
issue from whether "training" is "cheating".

Ask the question simply:

Do the people on this list think that training is necessary for the creation
of an AGI, and would they call training the AGI cheating?

You say that training amounts to cheating and that you can't use it to create
an AGI.  I disagree.

David Clark

> -----Original Message-----
> From: Mike Tintner [mailto:[EMAIL PROTECTED]]
> Sent: March-03-08 8:48 PM
> To: agi@v2.listbox.com
> Subject: Re: [agi] Thought experiment on informationally limited systems
> 
> Will: Is generalising a skill logically the first thing that you need to
> make an AGI? Nope, the means and sufficient architecture to acquire
> skills and competencies are more useful early on in AGI development.
> 
> Ah, you see, that's where I absolutely disagree, and a good part of why
> I'm hammering on the way I am. I don't think many (anyone?) will agree
> with David, but many if not everyone will agree with you.
> 
> Yes, the problem of generalising is the very first thing you tackle, and
> should shape everything you do - at least once you have moved beyond idle
> thought to serious engagement.
> 
> If you're trying to develop a new electric battery, you look for that new
> chemical first (assuming that's what you reckon you'll need) - you don't
> start by looking at the casing or other aspects of the battery. Anything
> peripheral you do first may be rendered totally irrelevant - and a total
> waste of time - later on, when you do discover that chemical.
> 
> And such, I'm sure, is the case with AGI. That central problem of
> generalising demands a totally new mentality - a sea-change of approach.
> 
> (You saw an example in my exchange with YKY. I think - in fact, I'm just
> about totally certain - that generalising demands a system of open-ended
> concepts like ours. Because he isn't directly concerned with the
> generalising problem, he wants a closed-ended, unambiguous language -
> which is in fact only suitable for narrow AI and, I would argue, a waste
> of time.)
> 
> P.S. It's a bit sad - you started this thread with a generalising
> problem, and now you're backtracking on it.
