Hi Matt,
... The Gamez paper situation is now, erm, resolved. You are right: the paper doesn't argue that solving consciousness is necessary for AGI. What has happened recently is a subtle shift - those involved simply fail to make any claims about the consciousness, or otherwise, of the machines! That does not entail that they are not actually working on it; they are just being cautious. Also, you correctly observe that solving AGI on a purely computational basis is not prohibited by the workers behind the Gamez paper - indeed, most of their work assumes it - and I don't have a problem with this. However, 'attributing' consciousness to a machine based on its behaviour is probably about as unscientific as it gets. That outcome betrays no understanding whatever of consciousness, its mechanism or its role; it merely assumes COMP is true and creates an agreement based on ignorance. This is fatally flawed non-science.

[BTW: We need an objective test (I have one - I am waiting for it to get published...). I'm going to try and see where it is in that process. If my test is acceptable then I predict all COMP entrants will fail, but I'll accept whatever happens - in my test, external behaviour is decisive. Bear with me a while till I get it sorted.]

I am still getting to know the folks [EMAIL PROTECTED] And the group may be diverse, as you say ... but if they are all COMP, then that diversity is like a group dedicated to an unresolved argument over the colour of a fish's bicycle. If we can attract the attention of the likes of those in the Gamez paper, and of others such as Hynna and Boahen at Stanford, who have an unusual hardware neural architecture (Hynna, K. M. and Boahen, K., 'Thermodynamically equivalent silicon models of voltage-dependent ion channels', /Neural Computation/, vol. 19, no. 2, 2007, pp. 327-350), then things will be diverse and authoritative. In particular, I mean those who have recently, essentially, squashed the computational theories of mind from a neuroscience perspective - the 'integrative neuroscientists':

Poznanski, R. R., Biophysical Neural Networks: Foundations of Integrative Neuroscience, Mary Ann Liebert, Larchmont, NY, 2001.

Pomerantz, J. R., Topics in Integrative Neuroscience: From Cells to Cognition, Cambridge University Press, Cambridge, UK, 2008.

Gordon, E. (ed.), Integrative Neuroscience: Bringing Together Biological, Psychological and Clinical Models of the Human Brain, Harwood Academic, Amsterdam, 2000.

The only working, known model of general intelligence is the human. If we base AGI on anything that fails to account scientifically and completely for /all/ aspects of human cognition, including consciousness, then we open ourselves to charges of critical inferiority... and the rest of science will simply write the group off as an irrelevant, cultish backwater. Strategically, the group would do well to make choices that attract the attention of the 'machine consciousness' crowd - they are directly linked to neuroscience via cognitive science. The crowd that runs with JETAI (the Journal of Experimental and Theoretical Artificial Intelligence) is another relevant one. It'd be nice if those people also saw the AGI journal as a viable repository for their output. I for one will try to help in that regard. Time will tell, I suppose.

cheers,
colin hales


Matt Mahoney wrote:
--- On Mon, 10/13/08, Colin Hales <[EMAIL PROTECTED]> wrote:

In the wider world of science, it is the current state of play that the
theoretical basis for real AGI is an open and multi-disciplinary
question. One would expect a forum that purports to be invested in the
achievement of real AGI to take a multidisciplinary approach on many
fronts, all competing scientifically for a route to real AGI.
I think this group is pretty diverse. No two people here can agree on how to 
build AGI.

Gamez, D. 'Progress in machine consciousness', Consciousness and
Cognition vol. 17, no. 3, 2008. 887-910.

$31.50 from Science Direct. I could not find a free version. I don't understand 
why an author would not at least post their published papers on their personal 
website. It greatly increases the chance that their paper is cited. I 
understand some publications require you to give up your copyright, including
your right to post your own paper. I refuse to publish with them.

(I don't know the copyright policy for Science Direct, but they are really milking the 
"publish or perish" mentality of academia. Apparently you pay to publish with 
them, and then they sell your paper.)

In any case, I understand you have a pending paper on machine consciousness. 
Perhaps you could make it available. I don't believe that consciousness is 
relevant to intelligence, but that the appearance of consciousness is. Perhaps 
you can refute my position.

-- Matt Mahoney, [EMAIL PROTECTED]



