Colin appears to have clarified his position. It seems to be that computers cannot be intelligent, and we need some other kind of device for AGI, which he is working on.
That is a perfectly possible assertion and approach. Unfortunately, what Ben stated as A is essentially an assumption of this list and of any programmer working on AGI, so I'm not sure how valuable Colin will find the list. Also, it's not a position I think I've ever seen defended in any convincing way, and I rather suspect it can't be. Indeed, it sets off my crank alert. I will try to be as patient as ever I am, which really isn't much, but I post this as a warning. I do have a positive contribution to make to this conversation, but the stream has been flowing a little too quickly for me to jump in. Maybe a bit later.

andi

Colin posted:
> Ben Goertzel wrote:
>>
>> I still don't really get it, sorry... ;-(
>>
>> Are you saying
>>
>> A) that a conscious, human-level AI **can** be implemented on an
>> ordinary Turing machine, hooked up to a robot body
>>
>> or
>>
>> B) A is false
>>
> B)
>
> Yeah, that about does it.
>
> Specifically: it will never produce an original scientific act on the
> a-priori unknown. It is the "unknown" bit which is important. You can't
> deliver a 'model' of the unknown that delivers all of the aspects of the
> unknown without knowing it all already... catch-22... you have to be
> exposed /directly/ to all the actual novelty in the natural world, not
> the novelty recognised by a model of what novelty is. Consciousness
> (P-consciousness, and specifically and importantly visual
> P-consciousness) is the mechanism by which novelty in the actual DISTANT
> natural world is made apparent to the agent. Symbol grounding in
> qualia, NOT I/O. You do not get that information through your retina
> data. You get it from occipital visual P-consciousness. The Turing
> machine abstracts away the mechanism of access to the distal natural
> world, and hence has to be informed by a model, which you don't have...
>
> Because scientific behaviour is just a (formal, very testable)
> refinement of everyday intelligent behaviour, everyday intelligent
> behaviour of the kind humans have goes down the drain with it.
>
> With the TM precluded from producing a scientist, it is precluded as a
> mechanism for AGI.
>
> I like scientific behaviour. A great clarifier.
>
> cheers
> colin

-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com
-------------------------------------------
