2008/10/20 Mike Tintner <[EMAIL PROTECTED]>:
> (There is a separate, philosophical discussion,  about feasibility in a
> different sense -  the lack of  a culture of feasibility, which is perhaps,
> subconsciously what Ben was also referring to  -  no one, but no one, in
> AGI, including Ben,  seems willing to expose their AGI ideas and proposals
> to any kind of feasibility discussion at all  -  i.e. how can this or that
> method solve any of the problem of general intelligence?

This is because you define GI to be totally about creativity, analogy,
etc. That is part of GI, but by no means all of it. I'm a firm believer
in splitting tasks up and having people specialise in them, so I am not
worrying about creativity at the moment, beyond making sure that any
architecture I build doesn't constrain the kinds of creativity that
people working with it can produce.

Many useful advances in computer technology (operating systems,
networks including the internet) have come about by not assuming too
much about what will be done with them. I think the first layer of a
GI system can be done the same way.

My self-selected speciality is resource allocation (RA). There are
times when certain forms of creativity are not a good option, e.g.
while flying a passenger jet. When shouldn't humans be creative? How
should creativity and the various other subsystems be managed?

Looking at OpenCog, the RA is not baked into the architecture, so I
have doubts about how well it would survive recursive self-change in
its current state. It will probably be adequate for what the OpenCog
team is doing at the moment, but getting the low-level architecture
wrong, or unfit for the next stage, is a good way to waste work.
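To make concrete what "baked-in" RA might look like (a minimal sketch,
not OpenCog's actual API; all names here are hypothetical): every
module call goes through an allocator that tracks a per-module time
budget and refuses to schedule a module once its budget is spent. A
self-modifying module then cannot sidestep the allocator, because the
allocator is the only path to the CPU.

```python
import time

class ResourceAllocator:
    """Hypothetical sketch of a baked-in RA layer: modules run only
    via the allocator, which enforces a per-module time budget."""

    def __init__(self):
        self.budgets = {}  # module name -> allowed seconds
        self.spent = {}    # module name -> seconds consumed so far

    def register(self, name, budget_seconds):
        self.budgets[name] = budget_seconds
        self.spent[name] = 0.0

    def run(self, name, fn, *args):
        # Refuse to schedule a module whose budget is exhausted.
        if self.spent[name] >= self.budgets[name]:
            return None
        start = time.perf_counter()
        try:
            return fn(*args)
        finally:
            self.spent[name] += time.perf_counter() - start

ra = ResourceAllocator()
ra.register("creative_search", 0.01)  # tightly budgeted
ra.register("flight_control", 1.0)    # safety-critical, generous

result = ra.run("flight_control", lambda x: x * 2, 21)
print(result)  # 42
```

The point of the sketch is the design choice, not the bookkeeping: if
the RA layer sits below the modules, recursive self-change above it
can't starve the safety-critical parts of the system.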

 Will Pearson


-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/