to sleep.
Whatever the outcome, at its root is the will to even start learning that
outcome. You have to be awake to have a free will.
What gets our AGI progeny up in the morning?
regards,
Colin Hales
---
To unsubscribe, change your address, or temporarily deactivate your subscription
of strategies. IMHO they all mean squat. I'm pretty sure
we're going to have to face this thing full on and cop the consequences.
I'm with Pei Wang. Let's explore and deal with it.
cheers,
Colin Hales
Philip: I personally think humans as a society are capable of saving
themselves from their own individual and collective stupidity. I've
worked explicitly on this issue for 30 years and still retain some
optimism on the subject.
Colin: I'm with Pei Wang. Let's explore and deal with it.
OK, if
Both Peter Wallis and Mike Georgeff have a history at Melbourne University.
http://www.cs.mu.oz.au/agentlab/
and they and RMIT http://www.agents.org.au/ collaborate a lot.
There is a mailing list from which you may launch queries.
They are an active group and quite approachable.
I've
This be Snyder...
http://www.centerforthemind.com/
Tread carefully.
cheers,
Col
of these (any) is
of interest?...I'm not sure of the kinds of things you folk want to hear
about. All comments are appreciated.
regards to all,
Colin Hales
it was in Hong
Kong. The last one I went to was Tucson, in 2006. It was a hoot. I wonder
if Dave Chalmers will do the 'end of consciousness' party and
blues-slam. :-) We'll see. Consider me 'applied for' as a workshop. I'll
do the applications ASAP.
regards,
Colin Hales
Ben Goertzel wrote:
In terms
-authority' ... I defer to the empirical reality of the
situation and would prefer that it be left to justify itself. I did not
make any of it up. I merely observed ... and so if you don't mind I'd
rather leave the issue there.
regards,
Colin Hales
Mike Tintner wrote:
Colin:
1
sure that computers can implement consciousness. But
I don't find that your arguments sway me one way or the other. A brief
reply follows.
2008/10/4 Colin Hales [EMAIL PROTECTED]:
Next empirical fact:
(v) When you create a turing-COMP substrate the interface with space is
completely destroyed
Let this realisation wash over you. It's what I had to do. I used to
think in COMP terms too. And
have fun! This is supposed to be fun!
cheers
Colin Hales
Ben Goertzel wrote:
The argument seems wrong to me intuitively, but I'm hard-put to argue
against it because the terms are so unclearly defined
in
expectations of our skills as explorers of the natural world ...than it
might appear. In being this way I hope to be part of the solution, not
part of the problem.
COMP being false make the AGI goal much harder...but much much more
interesting!
That's a little intro to colin hales for you.
cheers
OK. Last one!
Please replace 2) with:
2. Science says that the information from the retina is insufficient
to construct a visual scene.
Whether or not that 'construct' arises from computation is a matter of
semantics. I would say that it could be considered computation - natural
computation by
Excellent. I want one! Maybe they should be on sale at the next
conference...there's a marketing edge for ya.
If I have to be as wrong as Vladimir says I'll need the right clothes.
:-)
cheers
colin
Ben Goertzel wrote:
And you
can't escape flaws in your reasoning by wearing a lab
Hi Ben,
A good bunch of papers.
(1) Hales, C. 'An empirical framework for objective testing for
P-consciousness in an artificial agent', The Open Artificial
Intelligence Journal vol.? , 2008.
Apparently it has been accepted but I'll believe it when I see it.
It's highly relevant to the forum
real AGI and be seen as real science. To do that
this forum should attract cognitive scientists, psychologists,
physicists, engineers, neuroscientists. Over time, maybe we can get that
sort of diversity happening. I have enthusiasm for such things.
cheers
colin hales
drift.
cheers
colin hales
for their output. I for one will try
and help in that regard. Time will tell I suppose.
cheers,
colin hales
Matt Mahoney wrote:
--- On Mon, 10/13/08, Colin Hales [EMAIL PROTECTED] wrote:
In the wider world of science it is the current state of play that the
theoretical basis
cheers,
colin hales
Ben Goertzel wrote:
Hi,
My main impression of the AGI-08 forum was one of over-dominance
by singularity-obsessed and COMP thinking, which must have
freaked me out a bit.
This again is completely off-base ;-)
I also found my feelings about AGI-08 slightly coloured by first
Ben Goertzel wrote:
OK, but you have not yet explained what your theory of consciousness
is, nor what the physical mechanism nor role for consciousness that
you propose is ... you've just alluded obscurely to these things. So
it's hard to react except with raised eyebrows and skepticism!!
communities you mention? I've looked briefly but in vain ...
would appreciate any helpful pointers.
Thanks,
Terren
--- On *Tue, 10/14/08, Colin Hales /[EMAIL PROTECTED]/*
wrote:
From: Colin Hales [EMAIL PROTECTED]
Subject: Re: [agi] Advocacy Is no Excuse for Exaggeration
To: agi@v2
,
Colin Hales
Ben Goertzel wrote:
Again, when you say that these neuroscience theories have squashed
the computational theories of mind, it is not clear to me what you
mean by the computational theories of mind. Do you have a more
precise definition of what you mean?
I suppose it's a bit ambiguous.
Ben Goertzel wrote:
Sure, I know Pylyshyn's work ... and I know very few contemporary AI
scientists who adopt a strong symbol-manipulation-focused view of
cognition like Fodor, Pylyshyn and so forth. That perspective is
rather dated by now...
But when you say
Where computation is meant
Ben Goertzel wrote:
About self: you don't like Metzinger's neurophilosophy I presume?
(Being No One is a masterwork in my view)
I agree that integrative biology is the way to go for understanding
brain function ... and I was talking to Walter Freeman about his work
in the early 90's when
Ben Goertzel wrote:
I still don't really get it, sorry... ;-(
Are you saying
A) that a conscious, human-level AI **can** be implemented on an
ordinary Turing machine, hooked up to a robot body
or
B) A is false
B)
Yeah that about does it.
Specifically: It will never produce an
Matt Mahoney wrote:
--- On Tue, 10/14/08, Colin Hales [EMAIL PROTECTED] wrote:
The only reason for not connecting consciousness with AGI is a
situation where one can see no mechanism or role for it. That inability
is no proof there is none, and I have both to the point of having
Hi Trent,
You guys are forcing me to voice all sort of things in odd ways.
It's a hoot...but I'm running out of hours!!!
Trent Waddington wrote:
On Wed, Oct 15, 2008 at 4:48 PM, Colin Hales
[EMAIL PROTECTED] wrote:
you have to be exposed directly to all the actual novelty in the natural
Oops I forgot...
Ben Goertzel wrote:
About self: you don't like Metzinger's neurophilosophy I presume?
(Being No One is a masterwork in my view)
I got the book out and started to read it. But I found it incredibly
dense and practically useless. It told me nothing. I came out the other
John LaMuth wrote:
Colin
Consc. by nature is subjective ...
Can never prove this in a machine -- or other human beings for that
matter
Yes you can. This is a fallacy. You can prove it in humans and you can
prove it in a machine.
You simply demand it do science. Not simple - but possible. I
Ben Goertzel wrote:
Colin,
There's a difference between
1)
Discussing in detail how you're going to build a non-digital-computer
based AGI
2)
Presenting general, hand-wavy theoretical ideas as to why
digital-computer-based AGI can't work
I would be vastly more interested in 1 than 2 ...
Hi,
I was wondering as to the format ... who does what, how, speaking, etc.
What sort of airing do the contributors get for their material?
regards
colin
Ben Goertzel wrote:
Hi all,
I wanted to let you know that Gino Yu and I are co-organizing a
Workshop on Machine
Consciousness,
Matt Mahoney wrote:
--- On Mon, 11/10/08, Richard Loosemore [EMAIL PROTECTED] wrote:
Do you agree that there is no test to distinguish a
conscious human from a philosophical zombie, thus no way to
establish whether zombies exist?
Disagree.
What test would you use?
The
[EMAIL PROTECTED] wrote:
When people discuss the ethics of the treatment of artificial intelligent
agents, it's almost always with the presumption that the key issue is the
subjective level of suffering of the agent. This isn't the only possible
consideration.
One other consideration is our
according to them
are discovered, not defined. Humans did not wait for a definition of
fire before cooking dinner with it. Why should consciousness be any
different?
cheers
colin hales
Matt Mahoney wrote:
--- On Wed, 11/12/08, Colin Hales [EMAIL PROTECTED] wrote:
It is difficult but you can test for it objectively by
demanding that an entity based on your 'theory of
consciousness' deliver an authentic scientific act on
the a-priori unknown using visual experience
, and the rot sets in. The plus side - you
get to be 100% right. Personally I'd rather get real AGI built and be
testably wrong a million times along the way.
cheers,
colin hales
Matt Mahoney wrote:
--- On Wed, 11/12/08, Harry Chesley [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
If you
Richard Loosemore wrote:
Colin Hales wrote:
Dear Richard,
I have an issue with the 'falsifiable predictions' being used as
evidence of your theory.
The problem is that right or wrong...I have a working physical model
for consciousness. Predictions 1-3 are something that my hardware can
scientists, for 150 years. Can I consider this a general broadcast once
and for all? I don't ever want to have to pump this out again. Life is
too short.
regards,
Colin Hales
Richard Loosemore wrote:
Colin Hales wrote:
Richard Loosemore wrote:
Colin Hales wrote:
Dear Richard,
I have an issue with the 'falsifiable predictions' being used as
evidence of your theory.
The problem is that right or wrong...I have a working physical
model for consciousness
Trent Waddington wrote:
On Tue, Nov 18, 2008 at 4:07 PM, Colin Hales
[EMAIL PROTECTED] wrote:
I'd like to dispel all such delusion in this place so that neurally inspired
AGI gets discussed accurately, even if your intent is to explain
P-consciousness away... know exactly what you
And 'deep blue' knows nothing about chess.
These machines are manipulating abstract symbols at the speed of light.
The appearance of 'knowledge' of the natural world in the sense that
humans know things, must be absent and merely projected by us as
observers, because we are really
Hi,
I went through this exact process of vacillation in 2003.
I have a purely entrepreneurial outcome in mind, but found I needed to
have folk listen to me. In order that some comfort be taken (by those
with $$$) in my ideas, I found, to my chagrin...that having a 'license
to think = PhD' (as
Steve Richfield wrote:
Richard,
On 12/18/08, *Richard Loosemore* r...@lightlink.com
mailto:r...@lightlink.com wrote:
Rafael C.P. wrote:
Cognitive computing: Building a machine that can learn from
experience
http://www.physorg.com/news148754667.html
YKY (Yan King Yin) wrote:
DARPA buys G. Tononi for $4.9 million! For what amounts to little more
than vague hopes that any of us here could have dreamed up. Here I am, up to
my armpits in an actual working proposition with a real science basis...
scrounging for pennies. hmmm...maybe if I
J. Andrew Rogers wrote:
On Dec 18, 2008, at 10:09 PM, Colin Hales wrote:
I think I covered this in a post a while back but FYI... I am a
little 'left-field' in the AGI circuit in that my approach involves
literal replication of the electromagnetic field structure of brain
material
breakfast. Only 5 to go.
cheers
colin hales
J. Andrew Rogers wrote:
On Dec 19, 2008, at 12:13 PM, Colin Hales wrote:
The answer to this is that you can implement it in software. But you
won't do that because the result is not an AGI, but an actor with a
script. I actually started AGI believing that software would do it. When
I got
Ben Goertzel wrote:
Goodness. I have to tell you, Colin, your style of discourse just
SOUNDS so insane and off-base, it requires constant self-control on my
part to look past that and focus on any interesting ideas that may
exist amidst all the peculiarity!!
And if **I** react that way,
. Occam's razor prevents me from taking that
position.
So the argument cuts both ways!
1+1=FROG.
On the planet Blortlpoose the Prolog language does nothing but construct
cakes. :-)
This algorithmic nonsense was brought to you by the natural brain
electrodynamics of Colin Hales' brain.
and ALL
used too many damned words!
I expect we'll just have to agree to disagree... but there you have it :-)
colin hales
(1) Edelman, G. (2003). Naturalizing consciousness: A theoretical
framework. Proc Natl Acad Sci U S A, 100(9), 5520–5524.
Ed Porter wrote:
Colin,
From a quick read, the gist
Ed,
Comments interspersed below:
Ed Porter wrote:
Colin,
Here are my comments re the following parts of your below post:
===Colin said==
I merely point out that there are fundamental limits as to how
computer science (CS) can inform/validate basic/physical science - (in
Try this one ...
http://www.bentham.org/open/toaij/openaccess2.htm
If the test subject can be a scientist, it is an AGI.
cheers
colin
Steve Richfield wrote:
Deepak,
An intermediate step is the reverse Turing test (RTT), wherein
people or teams of people attempt to emulate an AGI. I suspect