[agi] The AGI and the will to live

2003-01-09 Thread Colin Hales
to sleep. Whatever the outcome, at its root is the will to even start learning that outcome. You have to be awake to have free will. What gets our AGI progeny up in the morning? regards, Colin Hales

RE: [agi] Playing with fire

2003-03-03 Thread Colin Hales
of strategies. IMHO they all mean squat. I'm pretty sure we're going to have to face this thing full on and cop the consequences. I'm with Pei Wang. Let's explore and deal with it. cheers, Colin Hales

RE: [agi] Playing with fire

2003-03-03 Thread Colin Hales
Philip: I personally think humans as a society are capable of saving themselves from their own individual and collective stupidity. I've worked explicitly on this issue for 30 years and still retain some optimism on the subject. Colin: I'm with Pei Wang. Let's explore and deal with it. OK, if

RE: [agi] BDI architecture

2003-03-04 Thread Colin Hales
Both Peter Wallis and Mike Georgeff have a history at Melbourne University (http://www.cs.mu.oz.au/agentlab/), and they and RMIT (http://www.agents.org.au/) collaborate a lot. There is a mailing list from which you may launch queries. They are an active group and quite approachable. I've

RE: [agi] Intelligence enhancement

2003-06-22 Thread Colin Hales
This be Snyder... http://www.centerforthemind.com/ Tread carefully. cheers, Col

Re: [agi] Testing, and a question....

2008-10-03 Thread Colin Hales
of these (any) is of interest?...I'm not sure of the kinds of things you folk want to hear about. All comments are appreciated. regards to all, Colin Hales

Re: [agi] Testing, and a question....

2008-10-03 Thread Colin Hales
it was in Hong Kong. The last one I went to was Tucson, 2006. It was a hoot. I wonder if Dave Chalmers will do the 'end of consciousness' party and blues-slam. :-) We'll see. Consider me 'applied for' as a workshop. I'll do the applications ASAP. regards, Colin Hales Ben Goertzel wrote: In terms

Re: [agi] COMP = false

2008-10-03 Thread Colin Hales
-authority' ... I defer to the empirical reality of the situation and would prefer that it be left to justify itself. I did not make any of it up. I merely observed. ...and so, if you don't mind, I'd rather leave the issue there. regards, Colin Hales Mike Tintner wrote: Colin: 1

Re: [agi] COMP = false

2008-10-04 Thread Colin Hales
sure that computers can implement consciousness. But I don't find that your arguments sway me one way or the other. A brief reply follows. 2008/10/4 Colin Hales [EMAIL PROTECTED]: Next empirical fact: (v) When you create a Turing-COMP substrate, the interface with space is completely destroyed

Re: [agi] COMP = false

2008-10-05 Thread Colin Hales
this realisation wash over you. It's what I had to do. I used to think in COMP terms too. And have fun! This is supposed to be fun! cheers Colin Hales Ben Goertzel wrote: The argument seems wrong to me intuitively, but I'm hard-put to argue against it because the terms are so unclearly defined

Re: [agi] COMP = false

2008-10-05 Thread Colin Hales
in expectations of our skills as explorers of the natural world ...than it might appear. In being this way I hope to be part of the solution, not part of the problem. COMP being false makes the AGI goal much harder... but much, much more interesting! That's a little intro to Colin Hales for you. cheers

Re: [agi] COMP = false

2008-10-05 Thread Colin Hales
OK. Last one! Please replace 2) with: 2. Science says that the information from the retina is insufficient to construct a visual scene. Whether or not that 'construct' arises from computation is a matter of semantics. I would say that it could be considered computation - natural computation by
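A minimal sketch of the underdetermination claim above (an added illustration, not part of the original thread; the pinhole model and all names are assumptions): perspective projection is many-to-one, so two distinct 3D scenes can yield identical retinal images, and no computation on the image alone can tell them apart.

```python
# Hypothetical illustration: pinhole projection is many-to-one, so the
# retinal image alone underdetermines the 3D scene (the inverse-optics
# problem). All names here are placeholders, not from the thread.
import numpy as np

def project(points_3d, focal=1.0):
    """Pinhole projection of Nx3 points onto the image plane z = focal."""
    pts = np.asarray(points_3d, dtype=float)
    return focal * pts[:, :2] / pts[:, 2:3]

# Two different scenes: the second is the first pushed twice as far away
# and scaled up by the same factor, preserving every visual angle.
scene_a = np.array([[0.1, 0.2, 1.0], [-0.3, 0.1, 2.0]])
scene_b = 2.0 * scene_a

# Identical images from distinct scenes: reconstruction needs
# constraints beyond the retinal data itself.
assert np.allclose(project(scene_a), project(scene_b))
```

Whatever supplies the missing constraints (priors, extra-retinal signals, or the 'natural computation' the post alludes to) is doing work the retinal data cannot do on its own.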

Re: [agi] COMP = false

2008-10-06 Thread Colin Hales
Excellent. I want one! Maybe they should be on sale at the next conference...there's a marketing edge for ya. If I have to be as wrong as Vladimir says I'll need the right clothes. :-) cheers colin Ben Goertzel wrote: And you can't escape flaws in your reasoning by wearing a lab

OFFLIST [agi] Readings on evaluation of AGI systems

2008-10-07 Thread Colin Hales
Hi Ben, A good bunch of papers. (1) Hales, C. 'An empirical framework for objective testing for P-consciousness in an artificial agent', The Open Artificial Intelligence Journal, vol. ?, 2008. Apparently it has been accepted, but I'll believe it when I see it. It's highly relevant to the forum

Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-13 Thread Colin Hales
real AGI and be seen as real science. To do that, this forum should attract cognitive scientists, psychologists, physicists, engineers, and neuroscientists. Over time, maybe we can get that sort of diversity happening. I have enthusiasm for such things. cheers colin hales

Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-13 Thread Colin Hales
drift. cheers colin hales

Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-13 Thread Colin Hales
for their output. I for one will try and help in that regard. Time will tell I suppose. cheers, colin hales Matt Mahoney wrote: --- On Mon, 10/13/08, Colin Hales [EMAIL PROTECTED] wrote: In the wider world of science it is the current state of play that the theoretical basis

Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-13 Thread Colin Hales
cheers, colin hales

Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Colin Hales
Ben Goertzel wrote: Hi, My main impression of the AGI-08 forum was one of over-dominance by singularity-obsessed and COMP thinking, which must have freaked me out a bit. This again is completely off-base ;-) I also found my feeling about -08 as slightly coloured by first

Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Colin Hales
Ben Goertzel wrote: OK, but you have not yet explained what your theory of consciousness is, nor what the physical mechanism nor role for consciousness that you propose is ... you've just alluded obscurely to these things. So it's hard to react except with raised eyebrows and skepticism!!

Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Colin Hales
communities you mention? I've looked briefly but in vain ... would appreciate any helpful pointers. Thanks, Terren --- On Tue, 10/14/08, Colin Hales [EMAIL PROTECTED] wrote: From: Colin Hales [EMAIL PROTECTED] Subject: Re: [agi] Advocacy Is no Excuse for Exaggeration To: agi@v2

Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Colin Hales
, Colin Hales

Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Colin Hales
Ben Goertzel wrote: Again, when you say that these neuroscience theories have squashed the computational theories of mind, it is not clear to me what you mean by the computational theories of mind. Do you have a more precise definition of what you mean? I suppose it's a bit ambiguous.

Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Colin Hales
Ben Goertzel wrote: Sure, I know Pylyshyn's work ... and I know very few contemporary AI scientists who adopt a strong symbol-manipulation-focused view of cognition like Fodor, Pylyshyn and so forth. That perspective is rather dated by now... But when you say Where computation is meant

Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Colin Hales
Ben Goertzel wrote: About self: you don't like Metzinger's neurophilosophy I presume? (Being No One is a masterwork in my view) I agree that integrative biology is the way to go for understanding brain function ... and I was talking to Walter Freeman about his work in the early 90's when

Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Colin Hales
Ben Goertzel wrote: I still don't really get it, sorry... ;-( Are you saying A) that a conscious, human-level AI **can** be implemented on an ordinary Turing machine, hooked up to a robot body, or B) that A is false? B) Yeah, that about does it. Specifically: It will never produce an

Re: COMP = false? (was Re: [agi] Advocacy Is no Excuse for Exaggeration)

2008-10-14 Thread Colin Hales
Matt Mahoney wrote: --- On Tue, 10/14/08, Colin Hales [EMAIL PROTECTED] wrote: The only reason for not connecting consciousness with AGI is a situation where one can see no mechanism or role for it. That inability is no proof there is none, and I have both to the point of having

Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-15 Thread Colin Hales
Hi Trent, You guys are forcing me to voice all sorts of things in odd ways. It's a hoot... but I'm running out of hours!!! Trent Waddington wrote: On Wed, Oct 15, 2008 at 4:48 PM, Colin Hales [EMAIL PROTECTED] wrote: you have to be exposed directly to all the actual novelty in the natural

Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-15 Thread Colin Hales
Oops I forgot... Ben Goertzel wrote: About self: you don't like Metzinger's neurophilosophy I presume? (Being No One is a masterwork in my view) I got the book out and started to read it. But I found it incredibly dense and practically useless. It told me nothing. I came out the other

Re: [agi] Advocacy Is no Excuse for Consciousness

2008-10-15 Thread Colin Hales
John LaMuth wrote: Colin: Consc. by nature is subjective ... Can never prove this in a machine -- or other human beings, for that matter. Yes you can. This is a fallacy. You can prove it in humans and you can prove it in a machine. You simply demand it do science. Not simple - but possible. I

Re: [agi] META: A possible re-focusing of this list

2008-10-16 Thread Colin Hales
Ben Goertzel wrote: Colin, There's a difference between 1) Discussing in detail how you're going to build a non-digital-computer based AGI 2) Presenting general, hand-wavy theoretical ideas as to why digital-computer-based AGI can't work I would be vastly more interested in 1 than 2 ...

Re: [agi] Machine Consciousness Workshop, Hong Kong, June 2009

2008-10-30 Thread Colin Hales
Hi, I was wondering as to the format... who does what, how, speaking, etc. What sort of airing do the contributors get for their material? regards colin Ben Goertzel wrote: Hi all, I wanted to let you know that Gino Yu and I are co-organizing a Workshop on Machine Consciousness,

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-10 Thread Colin Hales
Matt Mahoney wrote: --- On Mon, 11/10/08, Richard Loosemore [EMAIL PROTECTED] wrote: Do you agree that there is no test to distinguish a conscious human from a philosophical zombie, thus no way to establish whether zombies exist? Disagree. What test would you use? The

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread Colin Hales
[EMAIL PROTECTED] wrote: When people discuss the ethics of the treatment of artificial intelligent agents, it's almost always with the presumption that the key issue is the subjective level of suffering of the agent. This isn't the only possible consideration. One other consideration is our

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread Colin Hales
according to them are discovered, not defined. Humans did not wait for a definition of fire before cooking dinner with it. Why should consciousness be any different? cheers colin hales

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread Colin Hales
Matt Mahoney wrote: --- On Wed, 11/12/08, Colin Hales [EMAIL PROTECTED] wrote: It is difficult, but you can test for it objectively by demanding that an entity based on your 'theory of consciousness' deliver an authentic scientific act on the a priori unknown using visual experience
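A schematic harness for one possible reading of that test (an added sketch, not Colin Hales' actual protocol; `unknown_law`, the scoring rule, and the tolerance are all assumptions): hide a regularity from the candidate, expose it only through observations, and score whether the candidate produces a model that predicts fresh, unseen cases.

```python
# Hypothetical sketch of such a test harness, not the protocol from the
# thread. 'unknown_law' stands in for an a-priori-unknown natural
# regularity, hidden from the candidate and observable only as data.
import random

def unknown_law(x):
    return 3.0 * x + 1.0  # placeholder ground truth

def run_test(candidate, trials=100, tol=1e-6):
    """candidate: takes a list of (stimulus, reading) pairs and returns
    a predictive model (a callable). Passing requires generalising to
    fresh stimuli, i.e. an act of hypothesis-making, not replay."""
    obs = [(x, unknown_law(x))
           for x in (random.uniform(-10, 10) for _ in range(trials))]
    model = candidate(obs)
    probes = [random.uniform(-10, 10) for _ in range(trials)]
    return all(abs(model(x) - unknown_law(x)) < tol for x in probes)

def least_squares_candidate(obs):
    """Toy candidate: ordinary least-squares fit to the observations."""
    n = len(obs)
    sx = sum(x for x, _ in obs)
    sy = sum(y for _, y in obs)
    sxx = sum(x * x for x, _ in obs)
    sxy = sum(x * y for x, y in obs)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return lambda x: a * x + b

print(run_test(least_squares_candidate))  # True for this toy law
```

The toy candidate passes by ordinary regression, which only demonstrates the shape of the harness; the point of the proposed test is that the hidden regularity be genuinely a priori unknown, not drawn from a family the candidate was built to fit.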

[agi] test

2008-11-13 Thread Colin Hales

Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-13 Thread Colin Hales
, and the rot sets in. The plus side - you get to be 100% right. Personally I'd rather get real AGI built and be testably wrong a million times along the way. cheers, colin hales Matt Mahoney wrote: --- On Wed, 11/12/08, Harry Chesley [EMAIL PROTECTED] wrote: Matt Mahoney wrote: If you

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Colin Hales
Richard Loosemore wrote: Colin Hales wrote: Dear Richard, I have an issue with the 'falsifiable predictions' being used as evidence of your theory. The problem is that right or wrong...I have a working physical model for consciousness. Predictions 1-3 are something that my hardware can

[agi] Now hear this: Human qualia are generated in the human cranial CNS and no place else

2008-11-17 Thread Colin Hales
scientists, for 150 years. Can I consider this a general broadcast once and for all? I don't ever want to have to pump this out again. Life is too short. regards, Colin Hales

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Colin Hales
Richard Loosemore wrote: Colin Hales wrote: Richard Loosemore wrote: Colin Hales wrote: Dear Richard, I have an issue with the 'falsifiable predictions' being used as evidence of your theory. The problem is that right or wrong...I have a working physical model for consciousness

Re: [agi] Now hear this: Human qualia are generated in the human cranial CNS and no place else

2008-11-18 Thread Colin Hales
Trent Waddington wrote: On Tue, Nov 18, 2008 at 4:07 PM, Colin Hales [EMAIL PROTECTED] wrote: I'd like to dispel all such delusion in this place so that neurally inspired AGI gets discussed accurately, even if your intent is to explain P-consciousness away... know exactly what you

Re: [agi] Machine Knowledge and Inverse Machine Knowledge...

2008-12-09 Thread Colin Hales
And 'Deep Blue' knows nothing about chess. These machines are manipulating abstract symbols at the speed of light. The appearance of 'knowledge' of the natural world, in the sense that humans know things, must be absent and merely projected by us as observers, because we are really

Re: [agi] Should I get a PhD?

2008-12-17 Thread Colin Hales
Hi, I went through this exact process of vacillation in 2003. I have a purely entrepreneurial outcome in mind, but found I needed to have folk listen to me. In order that some comfort be taken (by those with $$$) in my ideas, I found, to my chagrin...that having a 'license to think = PhD' (as

Re: [agi] Building a machine that can learn from experience

2008-12-18 Thread Colin Hales
Steve Richfield wrote: Richard, On 12/18/08, Richard Loosemore r...@lightlink.com wrote: Rafael C.P. wrote: Cognitive computing: Building a machine that can learn from experience http://www.physorg.com/news148754667.html

Re: [agi] Building a machine that can learn from experience

2008-12-18 Thread Colin Hales
YKY (Yan King Yin) wrote: DARPA buys G. Tononi for $4.9 million! For what amounts to little more than vague hopes that any of us here could have dreamed up. Here I am, up to my armpits in an actual working proposition with a real science basis... scrounging for pennies. hmmm... maybe if I

Re: [agi] Building a machine that can learn from experience

2008-12-19 Thread Colin Hales
J. Andrew Rogers wrote: On Dec 18, 2008, at 10:09 PM, Colin Hales wrote: I think I covered this in a post a while back but FYI... I am a little 'left-field' in the AGI circuit in that my approach involves literal replication of the electromagnetic field structure of brain material

Re: [agi] Building a machine that can learn from experience

2008-12-19 Thread Colin Hales
breakfast. Only 5 to go. cheers colin hales

Re: [agi] Building a machine that can learn from experience

2008-12-19 Thread Colin Hales
J. Andrew Rogers wrote: On Dec 19, 2008, at 12:13 PM, Colin Hales wrote: The answer to this is that you can implement it in software. But you won't do that because the result is not an AGI, but an actor with a script. I actually started AGI believing that software would do it. When I got

Re: [agi] Building a machine that can learn from experience

2008-12-19 Thread Colin Hales
Ben Goertzel wrote: Goodness. I have to tell you, Colin, your style of discourse just SOUNDS so insane and off-base, it requires constant self-control on my part to look past that and focus on any interesting ideas that may exist amidst all the peculiarity!! And if **I** react that way,

Re: [agi] Building a machine that can learn from experience

2008-12-19 Thread Colin Hales
. Occam's razor prevents me from taking that position. So the argument cuts both ways! 1+1=FROG. On the planet Blortlpoose the Prolog language does nothing but construct cakes. :-) This algorithmic nonsense was brought to you by the natural brain electrodynamics of Colin Hales' brain. and ALL

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-22 Thread Colin Hales
used too many damned words! I expect we'll just have to agree to disagree... but there you have it :-) colin hales (1) Edelman, G. (2003). Naturalizing consciousness: A theoretical framework. Proc Natl Acad Sci U S A, 100(9), 5520-5524. Ed Porter wrote: Colin, From a quick read, the gist

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-23 Thread Colin Hales
Ed, Comments interspersed below: Ed Porter wrote: Colin, Here are my comments re the following parts of your below post: ===Colin said=== I merely point out that there are fundamental limits as to how computer science (CS) can inform/validate basic/physical science - (in

Re: [agi] Is there any Contest or test to ensure that a System is AGI?

2010-07-18 Thread Colin Hales
Try this one ... http://www.bentham.org/open/toaij/openaccess2.htm If the test subject can be a scientist, it is an AGI. cheers colin Steve Richfield wrote: Deepak, An intermediate step is the reverse Turing test (RTT), wherein people or teams of people attempt to emulate an AGI. I suspect