Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-15 Thread Trent Waddington
On Wed, Oct 15, 2008 at 4:48 PM, Colin Hales
[EMAIL PROTECTED] wrote:
 you have to be exposed directly to all the actual novelty in the natural 
 world, not the novelty
 recognised by a model of what novelty is. Consciousness (P-consciousness and
 specifically and importantly visual P-consciousness) is the mechanism by
 which novelty in the actual DISTANT natural world is made apparent to the
 agent. Symbolic grounding in Qualia NOT I/O. You do not get that information
 through your retina data. You get it from occipital visual P-consciousness.
 The Turing machine abstracts the mechanism of access to the distal natural
 world and hence has to be informed by a model, which you don't have...

Wow.  I know I don't know what P-consciousness is... and clearly I
must not know what Qualia is... The capital must change the meaning
from the normal definition.

But basically I think you have to come out right now and say what your
philosophy of reality is.

If your complaint is that a robot's senses are not as rich or as complex
as a human's senses, and therefore an AI hooked up to robot senses cannot
possibly have the same qualia as humans, then can you *stipulate for
the sake of argument* that it may be possible to supply human senses
to an AI so that it does have the same qualia?  Or are you saying that
there's some mystical, magical thing about humans that makes it
impossible for an AI to have the same qualia?

And if you're not happy with the idea of an AI having the same qualia
as humans, then surely you're willing to agree that a human that was
born wired solely into robot senses (suppose it's for humanitarian
reasons, rather than just Nazi doctors having fun, if you like) would
have fundamentally different qualia.  You believe this human would not
produce an original scientific act on the a-priori unknown -
whatever that means - or does the fact that this evil human-robot
hybrid is somehow half human give it a personal blessing from God?

Trent

- suggesting that maybe this list is still not flammable enough.
-- and maybe there's a point where philosophical argument descends
into incoherent babble with no chance of ever developing into a-priori
unknown original scientific truths.


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-15 Thread Colin Hales

Hi Trent,
You guys are forcing me to voice all sorts of things in odd ways.
It's a hoot... but I'm running out of hours!!!


Trent Waddington wrote:

On Wed, Oct 15, 2008 at 4:48 PM, Colin Hales
[EMAIL PROTECTED] wrote:
  

you have to be exposed directly to all the actual novelty in the natural world, 
not the novelty
recognised by a model of what novelty is. Consciousness (P-consciousness and
specifically and importantly visual P-consciousness) is the mechanism by
which novelty in the actual DISTANT natural world is made apparent to the
agent. Symbolic grounding in Qualia NOT I/O. You do not get that information
through your retina data. You get it from occipital visual P-consciousness.
The Turing machine abstracts the mechanism of access to the distal natural
world and hence has to be informed by a model, which you don't have...



Wow.  I know I don't know what P-consciousness is... and clearly I
must not know what Qualia is... The capital must change the meaning
from the normal definition.

But basically I think you have to come out right now and say what your
philosophy of reality is.
  
Let me say right away that if you don't know what qualia or 
P-consciousness are, then you're missing 150 years of biology and things 
are going to look kind of odd. I suggest a rapid wiki/Google exercise 
(also, in a recent post I delivered a whole pile of definitions and 
references).


I don't have a philosophy of reality. I exist, at a practical level, 
within the confines of the standard particle model: the 4 forces, their 
transmitters, and the particle zoo. I don't need anything else to make a 
cogent case for my model, which stacks up empirically in the normal way.


I do have a need to alter science, however, to become a dual-aspect 
epistemology about a monism, entirely consistent with all existing 
science. Only the options open to scientists change, and the structure of 
knowledge changes. In that case, the objective view I use has a very 
simple extension which accounts for subjectivity with physical, causal 
teeth.





If your complaint is that a robot's senses are not as rich or as complex
as a human's senses, and therefore an AI hooked up to robot senses cannot
possibly have the same qualia as humans, then can you *stipulate for
the sake of argument* that it may be possible to supply human senses
to an AI so that it does have the same qualia?  Or are you saying that
there's some mystical, magical thing about humans that makes it
impossible for an AI to have the same qualia?

And if you're not happy with the idea of an AI having the same qualia
as humans, then surely you're willing to agree that a human that was
born wired solely into robot senses (suppose it's for humanitarian
reasons, rather than just Nazi doctors having fun, if you like) would
have fundamentally different qualia.  You believe this human would not
produce an original scientific act on the a-priori unknown -
whatever that means - or does the fact that this evil human-robot
hybrid is somehow half human give it a personal blessing from God?

Trent

  
I'm not complaining about anything! I am dealing with brute reality. 
You are simply unaware of the job that AGI faces, and unaware of the 
150 years of physiological evidence that the periphery (the peripheral 
nervous system and the periphery of the central nervous system, like the 
retina) is not 'experienced'. None of it. I have already been through 
this in my original posting, I think. I/O signals (human and robot) _are 
not perceived_ and generate no sensations, i.e. they are qualia-null. 
Experience happens in the cranial central nervous system, and is merely 
projected as if it comes from the periphery. It feels like you have vision 
centered on your eyes, yes? Well, surprise: it's all an illusion. Vision 
happens in the back of your head and is projected to appear as if your 
eyes generated it. You need to get hold of some basic physiology.


So here is the surprise for everyone who's been operating under the 
assumption that symbol grounding is simply I/O wiring: WRONG. We are 
symbolically grounded in qualia: something that happens in the cranial 
CNS. Not even the spinal CNS generates any sensations. Pain in your back, 
anyone? WRONG. The pain comes from the cortex, NOT your spine. It's 
projected, and mostly badly.


As you must know from my postings, qualia are absolutely mandatory for 
handling novelty, for a whole pile of complex reasons. And robots will 
need them too. But they will not have them from simply wiring up I/O 
signals and manipulating abstractions. You need the equivalent of the 
complete CRANIAL central nervous system electrodynamics to achieve that, 
not a model of it.


So I demand that robots have qualia, for good physical, sensible, 
verifiable reasons. Whether they are exactly like human qualia is another 
question. A human with artificial but equivalent peripheral sensory 
transduction would have qualia because the CNS generates them, not 
because they are delivered by the I/O. And that human would be able to do 
original science on the a-priori unknown.

Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-15 Thread Colin Hales

Oops I forgot...

Ben Goertzel wrote:


About self: you don't like Metzinger's neurophilosophy I presume?  
(Being No One is a masterwork in my view)




I got the book out and started to read it, but I found it incredibly 
dense and practically useless. It told me nothing; I came out the other 
end with no clarity whatever, just a whole pile of self-referential 
jargon I couldn't build on. No information gained. Maybe in time it'll 
become more meaningful and useful, but it changed nothing. I expected a 
whole lot more.



It was kind of like Finnegans Wake: you can read it, or you can have a 
gin and tonic and hit yourself over the head with it. The result is 
pretty much the same.


:-)
colin





Re: COMP = false? (was Re: [agi] Advocacy Is no Excuse for Exaggeration)

2008-10-15 Thread Vladimir Nesov
On Wed, Oct 15, 2008 at 2:18 PM, David Hart [EMAIL PROTECTED] wrote:
 On Wed, Oct 15, 2008 at 5:52 PM, Colin Hales [EMAIL PROTECTED]
 wrote:

 So you'll just have to wait. Sorry. I also have patent/IP issues.

 Exactly what qualia am I expected to feel when you say the words
 'Intellectual Property'? (that's a rhetorical question, just in case there
 was any doubt!)

 I'd like to suggest that the COMP=false thread be considered a completely
 mis-placed, undebatable and dead topic on the AGI list.

That'd be great.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-15 Thread Ben Goertzel
Anyway I think Colin has now clarified his position.

To me, the key point is that he does not believe human-scientist-level
intelligence can be achieved via any digital computer plus robot body
apparatus.

This is a scientifically reasonable hypothesis, which has been made by Roger
Penrose and others before.

However, in my view it is incorrect to state that evidence for this
hypothesis is provided by any results in contemporary cognitive science or
neuroscience. I defy you to give me any neuroscience or cog sci result that
cannot be clearly explained using computable physics.

Also, it must be noted that, as Deutsch showed, any behavior that can be
achieved using quantum systems can be achieved using standard digital
computers (though the digital computers may be slower).  But perhaps Colin
wants to push back against Deutsch's argument by questioning its
assumptions?

There are many things in neuroscience and cognitive science that are not
well explained by **current** computation-based analyses, of course.  But,
notable is that weird-physics-based analyses **do not currently provide
better explanations of these phenomena**.

Colin notes that we do not have a good, detailed explanation of how
scientific creativity emerges from computational processes.  OK.  I tried to
give such an explanation in From Complexity to Creativity, but of course
whether my explanation is right, is subject to debate.

However, I submit that the computational approach has given far BETTER, more
detailed, more useful explanations of creativity than any weird-physics
based approach, so far.

Whether philosophy of consciousness requires us to assert that weird
physics is required to implement machine consciousness is a whole other
question.  However, the arguments in this regard are certainly shaky, as the
vast majority of (presumably conscious) people who understand the relevant
physics and have thought through the issues do not agree with the
assertion...

Or maybe Colin is the only one of us who is conscious? ;-) ... and, the
reason why we don't understand that consciousness is unachievable on digital
computers is that we lack qualia and don't ourselves know what consciousness
really is???  ;-)) ... In that case, he is wasting his time arguing with a
bunch of zombies!

-- Ben G

On Wed, Oct 15, 2008 at 9:33 AM, Mike Tintner [EMAIL PROTECTED]wrote:


 Trent : If you disagree with my paraphrasing of your opinion Colin, please

 feel free to rebut it *in plain english* so we can better figure out
 what the hell you're on about.


 Well, I agree that Colin hasn't made clear what he stands for
 [neo-]computationally. But perhaps he is doing us a service, in making clear
 how neuroscientific opinion is changing? I must confess I didn't know re
 integrative neuroscience. So there is something important to be explored
 here - how much *is* science (and cog sci) changing its computational
 paradigm?

 Basically, you guys are in general blinkering yourselves to the fact that
 the brain clearly works *fundamentally differently* to any computer - in
 major ways.

 Colin may not have succeeded in fully identifying or translating those
 differences into any useful mechanical form [or not - I'm certainly
 interested to hear more]. But sooner or later *someone* will.

 And it's a safe bet that cog. sci., which still largely underpins your
 particular computational view of mind, will very soon sweep the rug from under
 your feet. If I were you, I'd explore more here.

 (The parallels between a vastly overleveraged financial, economic and
 political world order suddenly collapsing, and a similarly overleveraged (in
 their claims) cog. sci. and AGI also on the verge of collapse, should not
 escape you.)







-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome   - Dr Samuel Johnson





Re: COMP = false? (was Re: [agi] Advocacy Is no Excuse for Exaggeration)

2008-10-15 Thread Ben Goertzel
Actually, I think COMP=false is a perfectly valid subject for discussion on
this list.

However, I don't think discussions of the form "I have all the answers, but
they're top-secret and I'm not telling you, hahaha" are particularly useful.

So, speaking as a list participant, it seems to me this thread has probably
met its natural end, with this reference to proprietary weird-physics IP.

However, speaking as list moderator, I don't find this thread so off-topic
or unpleasant as to formally kill the thread.

-- Ben

On Wed, Oct 15, 2008 at 6:18 AM, David Hart [EMAIL PROTECTED] wrote:

 On Wed, Oct 15, 2008 at 5:52 PM, Colin Hales [EMAIL PROTECTED]
  wrote:


 So you'll just have to wait. Sorry. I also have patent/IP issues.


 Exactly what qualia am I expected to feel when you say the words
 'Intellectual Property'? (that's a rhetorical question, just in case there
 was any doubt!)

 I'd like to suggest that the COMP=false thread be considered a completely
 mis-placed, undebatable and dead topic on the AGI list. Maybe people who
 like Chinese Rooms will sign up for the new COMP=false list...

 -dave




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome   - Dr Samuel Johnson





Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-15 Thread Mike Tintner

  Ben: I defy you to give me any neuroscience or cog sci result that cannot be 
clearly explained using computable physics.


  Ben,

  As discussed before, no current computational approach can replicate the 
brain's ability to produce a memory in what we can be very confident are only 
a few neuronal steps - by comparison with computers, which often take millions 
of steps. This is utterly central to general intelligence, and to the capacity 
to produce analogies/metaphors etc. The brain seems to work by *recall* (if 
I've got the right term) as opposed to *search*. (And Hawkins argues that the 
entire brain is a memory system - memories are stored everywhere.)

  That indicates a radically different computer to any we have.

  Ben:Colin notes that we do not have a good, detailed explanation of how 
scientific creativity emerges from computational processes.  OK.  I tried to 
give such an explanation in From Complexity to Creativity, but of course 
whether my explanation is right, is subject to debate.  

  Ben,

  I still intend to reply to your creativity post, but perhaps you'd care to 
at least label what your explanation of scientific creativity is - I'm not 
aware of your explaining, or connecting up, any of the theories you explore in 
any *direct* way with any creative process at all. My brief reading is that 
you indicate a loose, possible connection, but nothing direct - as your final 
Conclusion seems to confirm:

  14.7 CONCLUSION
  The phenomenon of creativity is a challenge for the psynet model, and for 
complexity science as a whole.

  Are you claiming you have any ideas here that anyone is paying attention to, 
or should?












Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-15 Thread Ben Goertzel
Mike:


Ben: I defy you to give me any neuroscience or cog sci result that cannot be
clearly explained using computable physics.

Ben, As discussed before, no current computational approach can replicate
the brain's ability to produce a memory in what we can be v. confident are
only a few neuronal steps - by comparison with computers which often take
millions of steps.


Actually, Hopfield neural net associative memory models do exactly that.
They are computational models.

Also, a few neuronal steps doesn't mean much -- a neuron is a complex
dynamical system, so a lot of things are going on each time a neuron fires.
See e.g. Izhikevich's models of neuronal dynamics.

http://vesicle.nsi.edu/users/izhikevich/human_brain_simulation/Blue_Brain.htm
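For readers who haven't met the model Ben names here, the following is a minimal, self-contained sketch of a Hopfield-style associative memory in Python. It is illustrative only (a toy instance of the general technique, not anyone's actual research code): a bipolar pattern is stored with the Hebbian outer-product rule, and recall from a corrupted cue settles back onto the stored pattern in a few update sweeps, i.e. recall by settling rather than serial search.

```python
# Toy Hopfield associative memory (illustrative sketch only).
# Stores bipolar (+1/-1) patterns via the Hebbian outer-product rule,
# then recalls a stored pattern from a corrupted cue by repeatedly
# updating each unit toward the sign of its weighted input.

def train(patterns):
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:  # no self-connections
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, cue, sweeps=5):
    state = list(cue)
    n = len(state)
    for _ in range(sweeps):
        for i in range(n):  # sequential (asynchronous) updates
            h = sum(w[i][j] * state[j] for j in range(n))
            state[i] = 1 if h >= 0 else -1
    return state

stored = [1, 1, -1, -1, 1, -1, 1, -1]
noisy = [1, -1, -1, -1, 1, -1, 1, 1]  # two bits flipped
w = train([stored])
print(recall(w, noisy) == stored)  # True: the cue settles onto the memory
```

The point of contact with the remark above is that retrieval is a parallel settling of the whole state, not a search through a list of stored items.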


***
I still intend to reply to your creativity post, but perhaps you/d care to
at least label what your explanation of scientific creativity is - I'm not
aware of your explaining, or connecting up any of the theories you explore -
in any *direct* way with any creative process at all. My brief reading is
that you indicate a loose, possible connection, but nothing direct -
***

What is the use of applying a handy label to a complex theory, in this
context?  So you can then argue against it based on the label, rather than
the actual ideas?

I have no idea what you mean by a direct connection.  I try to give an
explanation of the cognitive dynamics underlying acts of creativity.  I'm
happy to discuss the specifics of my explanation and why you think it's
inadequate (if you do).  If you don't have time to read the specifics,
that's fine, but I don't have time to summarize all that stuff I already
wrote in emails either ;-p

ben






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome   - Dr Samuel Johnson





Re: COMP = false? (was Re: [agi] Advocacy Is no Excuse for Exaggeration)

2008-10-15 Thread Jim Bromer
On Wed, Oct 15, 2008 at 10:14 AM, Ben Goertzel [EMAIL PROTECTED] wrote:

 Actually, I think COMP=false is a perfectly valid subject for discussion on
 this list.

 However, I don't think discussions of the form "I have all the answers, but
 they're top-secret and I'm not telling you, hahaha" are particularly useful.

 So, speaking as a list participant, it seems to me this thread has probably
 met its natural end, with this reference to proprietary weird-physics IP.

 However, speaking as list moderator, I don't find this thread so off-topic
 or unpleasant as to formally kill the thread.

 -- Ben

If someone doesn't want to get into a conversation with Colin about
whatever it is that he is saying, then they should just exercise some
self-control and refrain from doing so.

I think Colin's ideas are pretty far out there. But that does not mean
that he has never said anything that might be useful.

My offbeat topic (that I believe the Lord may have given me some
direction about a novel approach to logical satisfiability that I am
working on, but that I don't want to discuss the details of the
algorithms until I have had a chance to see whether they work) was
never intended to be a discussion about the theory itself.  I wanted
to have a discussion about whether or not a good SAT solution would
have a significant influence on AGI, and whether or not the unlikely
discovery of an unexpected breakthrough on SAT would serve as rational
evidence in support of the claim that the Lord helped me with the
theory.

Although I am skeptical about what I think Colin is claiming, there is
an obvious parallel between his case and mine.  There are relevant
issues which he wants to discuss even though his central claim seems
to be private, and these relevant issues may be interesting.

Colin's unusual reference to some solid path which cannot yet be
discussed is annoying partly because it is so obviously unfounded.  If
he had the proof (or a method), then why isn't he writing it up (or
working it out)?  A similar argument was made against me, by the way,
but the difference was that I never said that I had the proof or
method.  (I did say that you should get used to a polynomial-time
solution to SAT, but I never said that I had a working algorithm.)

My point is that even though people may annoy you with what seem like
unsubstantiated claims, that does not disqualify everything they have
said.  That rule could all too easily be applied to anyone who posts
on this list.

Jim Bromer




Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-15 Thread Mike Tintner
Ben: I don't have time to summarize all that stuff I already wrote in emails 
either ;-p

Ben,

I asked you to at least *label* what your explanation of scientific 
creativity is.  Just a label, Ben.  Books that are properly organized and 
constructed (and sell) usually do have clearly labelled theories which they 
hang the book around.  It isn't clear what your book's is.




Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-15 Thread BillK
On Wed, Oct 15, 2008 at 9:46 AM, Eric Burton wrote:
 My mistake I guess. I'm going to try harder to understand what you're
 saying from now on.


Colin's profile on Nature says:

I am a mature age PhD student with the sole intent of getting a novel
chip technology and derivative products into commercial production.
The chip technology facilitates natural learning of the kind biology
uses to adapt to novelty. The artifacts will have an internal life.

My mission is to create artificial (machines) that learn like biology
learns and that have an internal life. Currently that goal requires
lipid bilayer membrane molecular dynamics simulation.

Publications
  Colin Hales. AI and Science's Lost Realm IEEE Intelligent
Systems 21 , 76-81 (2006)
  Colin Hales. Physiology meets consciousness. A review of The
Primordial Emotions: The Dawning of Consciousness by Derek Denton
TRAFFIC EIGHT (2006)
  Hales, C. Qualia Ockham's Razor, Radio National, Australia 17 April (2005)
  Colin Hales. The 10 point framework and the altogether too hard
basket Science and Consciousness Review (2003)
---


BillK




Re: COMP = false? (was Re: [agi] Advocacy Is no Excuse for Exaggeration)

2008-10-15 Thread David Hart
On Wed, Oct 15, 2008 at 5:52 PM, Colin Hales
[EMAIL PROTECTED]wrote:


 So you'll just have to wait. Sorry. I also have patent/IP issues.


Exactly what qualia am I expected to feel when you say the words
'Intellectual Property'? (that's a rhetorical question, just in case there
was any doubt!)

I'd like to suggest that the COMP=false thread be considered a completely
mis-placed, undebatable and dead topic on the AGI list. Maybe people who
like Chinese Rooms will sign up for the new COMP=false list...

-dave





Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-15 Thread wannabe
Colin appears to have clarified his position.  It seems to be that
computers cannot be intelligent, and that we need some other kind of
device for AGI, which he is working on.

That is a perfectly possible assertion and approach.  Unfortunately, what
Ben stated as (A) is kind of an assumption for the list and for any
programmer working on AGI, so I'm not sure how valuable Colin will find
this list.  Also, from what I've seen, it's not a position that I think
I've ever seen defended in any convincing way, and I kind of suspect it
can't be.  Indeed, it sets off my crank alert.  I will try to be as
patient as ever I am, which really isn't much, but I just post this as a
warning.

I do have a positive contribution to make in this conversation, but this
stream has been flowing a little quickly for me to jump in.  Maybe a bit
later.
andi


Colin posted:
 Ben Goertzel wrote:

 I still don't really get it, sorry... ;-(

 Are you saying

 A) that a conscious, human-level AI **can** be implemented on an
 ordinary Turing machine, hooked up to a robot body

 or

 B) A is false

 B)

 Yeah that about does it.

 Specifically: It will never produce an original scientific act on the
 a-priori unknown. It is the unknown bit which is important. You can't
 deliver a 'model' of the unknown that delivers all of the aspects of the
 unknown without knowing it all already! Catch-22... you have to be
 exposed /directly/ to all the actual novelty in the natural world, not
 the novelty recognised by a model of what novelty is. Consciousness
 (P-consciousness and specifically and importantly visual
 P-consciousness) is the mechanism by which novelty in the actual DISTANT
 natural world is made apparent to the agent. Symbolic grounding in
 Qualia NOT I/O. You do not get that information through your retina
 data. You get it from occipital visual P-consciousness.  The Turing
 machine abstracts the mechanism of access to the distal natural world
 and hence has to be informed by a model, which you don't have...

 Because scientific behaviour is just a (formal, very testable)
 refinement of everyday intelligent behaviour, everyday intelligent
 behaviour of the kind humans have - goes down the drain with it.

 With the TM precluded from producing a scientist, it is precluded as a
 mechanism for AGI.

 I like scientific behaviour. A great clarifier.

 cheers
 colin















Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-15 Thread Eric Burton
I suppose it's a bit ambiguous. There's computer modelling of mind; then
there's the implementation of an actual mind using actual computation; then
there's the implementation of a brain using computation, in which a mind may
be said to be operating. All sorts of misdirection.

I think IBM is working on what you want to see. Also take a look at
http://www.intelligencerealm.com/aisystem/system.php




Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-15 Thread Ben Goertzel
Books that present theories out of the mainstream don't always fit into the
recognized systems of labels very comfortably ;-)

Such books may indeed not sell well, but short-term profitability is not a
good way of judging the soundness of a set of ideas.

I'll try my hand at a summary phrase you might understand: I view
human-style creativity as an emergent phenomenon that arises in certain
complex systems due to their internal
self-organization/autopoiesis/evolution and their coupling with the world.
I view it as related to the emergent phenomenon of self, and in cases of
extreme creativity, related to the phenomenon of subselves (so that
intense creative activity may be carried out by a dedicated subself).  I
view the dynamics of creativity as a balance between dynamics of autopoiesis
and evolution in self-reinforcing/self-generating mental subsystems.
Intuitively, I validate this idea by comparing it to the reported subjective
experiences of creative people throughout history.  Computational algorithms
like genetic algorithms and attractor neural nets, and mathematical
phenomena like Mandelbrot sets, are in some ways analogous to some aspects
of the dynamics of human-style creativity, though there are also significant
differences.

That is not a label, but maybe it gives you some idea of my direction of
thought.

The ideas presented in Complexity to Creativity are followed up more
extensively in The Hidden Pattern.

-- Ben G



On Wed, Oct 15, 2008 at 11:03 AM, Mike Tintner [EMAIL PROTECTED]wrote:

  Ben: I don't have time to summarize all that stuff I already wrote in
 emails either ;-p
 Ben,

 I asked you to at least *label* what your explanation of scientific
 creativity is. Just a label, Ben. Books that are properly organized and
 constructed (and sell) usually do have clearly labelled theories around
 which they hang the book. It isn't clear what your book's theory is.




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome   - Dr Samuel Johnson





Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-15 Thread Eric Burton
On Wed, Oct 15, 2008 at 6:26 AM, Colin Hales
[EMAIL PROTECTED] wrote:
 Hi,
 I am aware of 'blue brain'. It, and the distributed processor in the other
 link are still COMP and therefore subject to all the arguments I have been
 making, and therefore not on the path  to real AGI. It's interesting that
 the 'reverse-engineering' of the brain does not say what slice across the
 matter hierarchy they operate at...atomic , molecular, organelle, cell,
 brain region... not terribly clear. We do not possess enough computational
 power on earth to simulate even a small part of one cell, let alone a whole
 brain. Not that I want to simulate anything anyway! :-)

My mistake I guess. I'm going to try harder to understand what you're
saying from now on.




Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-15 Thread Trent Waddington
On Wed, Oct 15, 2008 at 9:59 PM,  [EMAIL PROTECTED] wrote:
 Also, from what I've seen, it's not a position that I think
 I've ever seen defended in any convincing way, and I kind of suspect it
 can't be.  Indeed, it sets off my crank-alert.

Yes, thank you.

If I can summarize Colin's opinion, without resorting to 150 years of
biology hand waving:

* human brains perceive reality not via senses, but via magic stuff in
the brain which you'll never learn and can never duplicate.. unless,
of course, you listen to what I think and implement my special magic
hardware, which is clearly superior and don't even try trotting out
that stuff about all computation being universal cause this stuff is
*really new* and therefore better.

* Oh, and even though every good scientist recognizes the importance
of using instruments and measurement in experiments, that stuff in no
way implies that the way humans see the world is more of a *hindrance*
to the study of said world.. in fact, it's so completely necessary
that without it an intelligence can never be intelligent.

If you disagree with my paraphrasing of your opinion Colin, please
feel free to rebut it *in plain english* so we can better figure out
what the hell you're on about.

Sheesh.

Trent




Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-15 Thread Mike Tintner


Trent : If you disagree with my paraphrasing of your opinion Colin, please

feel free to rebut it *in plain english* so we can better figure out
what the hell you're on about.



Well, I agree that Colin hasn't made clear what he stands for 
[neo-]computationally. But perhaps he is doing us a service, in making clear 
how neuroscientific opinion is changing? I must confess I didn't know re 
integrative neuroscience. So there is something important to be explored 
here - how much *is* science (and cog sci) changing its computational 
paradigm?


Basically, you guys are in general blinkering yourselves to the fact that 
the brain clearly works *fundamentally differently* to any computer - in 
major ways.


Colin may not have succeeded in fully identifying or translating those 
differences into any useful mechanical form [or not - I'm certainly 
interested to hear more]. But sooner or later *someone* will.


And it's a safe bet that cog. sci., which still largely underpins your 
particular computational view of mind, will v. soon sweep the rug from under 
your feet. If I were you, I'd explore more here.


(The parallels between a vastly overleveraged financial, economic and 
political world order suddenly collapsing and a similarly overleveraged (in 
their claims) cog. sci. and AGI also on the verge of collapse, should not 
escape you).







Re: COMP = false? (was Re: [agi] Advocacy Is no Excuse for Exaggeration)

2008-10-15 Thread Eric Burton
but I don't want to discuss the details about the
algorithms until I have gotten a chance to see if they work or not,

Hearing this makes my teeth gnash. GO AND IMPLEMENT THEM. THEN TELL US

On 10/15/08, Colin Hales [EMAIL PROTECTED] wrote:


 David Hart wrote:
 On Wed, Oct 15, 2008 at 5:52 PM, Colin Hales
 [EMAIL PROTECTED] mailto:[EMAIL PROTECTED]
 wrote:


 So you'll just have to wait. Sorry. I also have patent/IP issues.


 Exactly what qualia am I expected to feel when you say the words
 'Intellectual Property'? (that's a rhetorical question, just in case
 there was any doubt!)

 I'd like to suggest that the COMP=false thread be considered a
 completely mis-placed, undebatable and dead topic on the AGI list.
 Maybe people who like Chinese Rooms will sign up for the new
 COMP=false list...

 -dave
 Hi,
 The attendees here would like to be involved in the parenthood of real
 AGI, yes?

 I am being rather forthright in scientifically suggesting an approach to
 that outcome focussed entirely on COMP may not achieve that goal, and
 that a diversity of views is needed...and I have a non-COMP approach
 which is possibly a way to AGI.

 I know my claims have not been scientifically backed up. I will fix that.

 The fact is - COMP has already been refuted twice in print. I will be
 adding 2 more refutations. That is already 2 counts that make the term
 COMP-AGI an oxymoron. COMP was always a conjecture and has never been
 proven. The only recent assessment in the literature ends with the words
 Computationalism is dead.  Basic common sense dictates that if you are
 really keen on real AGI that is scientifically viable, then a diversity
 of approaches is advisable. According to Ben that seems to be the way of
 the group as a whole. I take some comfort from this. The  necessary
 diversity requires all manner of multidisciplinary scientists become
 interested and contribute. I intend to be one of those.

 So having 'shaken the tree' I'll leave it at that for now. I'll come
 back with publications to discuss and we can pick up the science of AGI
 from there. The first paper will be an objective test for
 P-consciousness in an artificial agent. A test I hope everyone's AGI
 candidates will be subjected to... so, back to work for me.

 enjoy.

 regards
 Colin Hales

 









Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Ben Goertzel
Hi,

My main impression of the AGI-08 forum was one of over-dominance by
 singularity-obsessed  and COMP thinking, which must have freaked me out a
 bit.


This again is completely off-base ;-)

COMP, yes ... Singularity, no.  The Singularity was not a theme of AGI-08
and the vast majority of participant researchers are not seriously into
Singularitarianism, futurism, and so forth.

There was a post-conference workshop on the Future of AGI, which about half
of the conference attendees attended, at which the Singularity and related
issues were discussed, among other issues.  For instance, the opening talk
at the workshop was given by Natasha Vita-More, who so far as I know is not
a Singularitarian per se, though an excellent futurist.  And one of the more
vocal folks in the discussions in the workshop was Selmer Bringsjord, who
believes COMP is false and has a different theory of intelligence than you
or me, tied into his interest in Christian philosophy.


 The only reason for not connecting consciousness with AGI is a situation
 where one can see no mechanism or role for it.



Seeing a mechanism or role for consciousness requires a specific theory of
consciousness that not everybody holds --- and as you surely know, not even
everyone in the machine consciousness community holds.

Personally I view the first-person, second-person and third-person views as
different perspectives on the universe, so I think it's a category error to
talk about mechanisms of consciousness ... though one can talk about
mechanisms that are correlated with particularly intense consciousness,
for example.

See my presentation from the Nokia workshop on Machine Consciousness in
August ... where I was the only admitted panpsychist ;-)

http://goertzel.org/NokiaMachineConsciousness.ppt

-- Ben





Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Terren Suydam

Hi Colin,

Are there other forums or email lists associated with some of the other AI 
communities you mention?  I've looked briefly but in vain ... would appreciate 
any helpful pointers.

Thanks,
Terren

--- On Tue, 10/14/08, Colin Hales [EMAIL PROTECTED] wrote:
From: Colin Hales [EMAIL PROTECTED]
Subject: Re: [agi] Advocacy Is no Excuse for Exaggeration
To: agi@v2.listbox.com
Date: Tuesday, October 14, 2008, 12:43 AM




  
Hi Matt,

... The Gamez paper situation is now...erm...resolved. You are right:
the paper doesn't argue that solving consciousness is necessary for
AGI. What has happened recently is a subtle shift - those involved
simply fail to make claims about the consciousness or otherwise of the
machines! This does not entail that they are not actually working on
it. They are just being cautious...Also, you correctly observe that
solving AGI on a purely computational basis is not prohibited by the
workers involved in the GAMEZ paper.. indeed most of their work assumes
it!... I don't have a problem with this...However...'attributing'
consciousness to it based on its behavior is probably about as
unscientific as it gets. That outcome betrays no understanding whatever
of consciousness, its mechanism or its role... and merely assumes COMP
is true and creates an agreement based on ignorance. This is fatally
flawed non-science. 



[BTW: We need an objective test (I have one - I am waiting for it to
get published...). I'm going to try and see where it's at in that
process. If my test is acceptable then I predict all COMP entrants will
fail, but I'll accept whatever happens... - and external behaviour is
decisive. Bear with me a while till I get it sorted.]



I am still getting to know the folks [EMAIL PROTECTED]. And the group may be
diverse, as you say ... but if they are all COMP, then that diversity
is like a group dedicated to an unresolved argument over the colour of
a fish's bicycle. If we can attract the attention of the likes of those
in the GAMEZ paper... and others such as Hynna and Boahen at Stanford,
who have an unusual hardware neural architecture...(Hynna,
K. M. and Boahen, K. 'Thermodynamically equivalent silicon models of
voltage-dependent ion channels', Neural
Computation vol. 19, no. 2, 2007. 327-350.) ...and others ...
then things will be diverse and authoritative. In particular, those who
have recently essentially squashed the computational theories of mind
from a neuroscience perspective- the 'integrative neuroscientists':

 
Poznanski, R. R., Biophysical neural networks: foundations of integrative
neuroscience, Mary Ann Liebert, Larchmont, NY, 2001.

Pomerantz, J. R., Topics in integrative neuroscience: from cells to
cognition, Cambridge University Press, Cambridge, UK; New York, 2008.

Gordon, E., Ed. (2000). Integrative neuroscience: bringing together
biological, psychological and clinical models of the human brain.
Amsterdam: Harwood Academic.

The only working, known model of general
intelligence is the human. If we base AGI on anything that fails to
account scientifically and completely for all aspects of human
cognition, including consciousness, then we open ourselves to critical
inferiority... and the rest of science will simply find the group an
irrelevant cultish backwater. Strategically the group would do well to
make choices that attract the attention of the 'machine consciousness'
crowd - they are directly linked to neuroscience via cog sci. The
crowd that runs with JETAI (journal of theoretical and experimental
artificial intelligence) is also another relevant one. It'd be
nice if those people also saw the AGI journal as a viable repository
for their output. I for one will try and help in that regard. Time will
tell I suppose.

 

cheers,

colin hales





Matt Mahoney wrote:

 --- On Mon, 10/13/08, Colin Hales [EMAIL PROTECTED] wrote:

  In the wider world of science it is the current state of play that the
  theoretical basis for real AGI is an open and multi-disciplinary
  question. A forum that purports to be invested in achievement of real
  AGI as a target, one would expect that forum to take a multidisciplinary
  approach on many fronts, all competing scientifically for access to
  real AGI.

 I think this group is pretty diverse. No two people here can agree on how to
 build AGI.

  Gamez, D. 'Progress in machine consciousness', Consciousness and
  Cognition vol. 17, no. 3, 2008. 887-910.

$31.50 from Science Direct. I could not find a free version. I don't understand 
why an author would not at least post their published papers on their personal 
website. It greatly increases the chance that their paper is cited. I 
understand some publications require you to give up your copyright including 
your right to post your own paper. I refuse to publish with them.

(I don't know the copyright policy for Science Direct, but they are really 
milking the publish or perish mentality of academia. Apparently

Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Mike Tintner
Colin:

others such as Hynna and Boahen at Stanford, who have an unusual hardware 
neural architecture...(Hynna, K. M. and Boahen, K. 'Thermodynamically 
equivalent silicon models of voltage-dependent ion channels', Neural 
Computation vol. 19, no. 2, 2007. 327-350.) ...and others ... then things will 
be diverse and authoritative. In particular, those who have recently 
essentially squashed the computational theories of mind from a neuroscience 
perspective- the 'integrative neuroscientists':

Poznanski, R. R., Biophysical neural networks : foundations of integrative 
neuroscience, Mary Ann Liebert, Larchmont, NY, 2001, pp. viii, 503 p.

Pomerantz, J. R., Topics in integrative neuroscience : from cells to cognition, 
Cambridge University Press, Cambridge, UK ; New York, 2008, pp. xix, 427 p.

Gordon, E., Ed. (2000). Integrative neuroscience : bringing together 
biological, psychological and clinical models of the human brain. Amsterdam, 
Harwood Academic.



Colin, 

This all looks v. interesting - googling quickly. The general integrative 
approach to the brain's functioning is clearly v. important. 

*Distinctive Paradigms/Approaches. But are any distinctive models or more 
specific paradigms emerging? It isn't immediately clear why AGI has to pay 
special attention here. Can you do a bit more selling of the importance of this 
field.

*Models - I notice some researchers are developing models of the brain's 
functioning. Are any worthwhile? I called here sometime ago for a Systems 
Psychology and Systems AI, that would be devoted to developing overall models 
both of the intelligent brain and of AGI systems. Existing AGI systems like 
Ben's offer de facto models of what is required for an intelligent mind. So it 
would be v. valuable to be able to compare different models, both natural and 
artificial.

*Embodied Cognitive Science.  How do you see int. neurosci. in relation to 
this? For example, I noted some purely neuronal models of the self. For me, 
only integrated brain-body models of the self are valid.

*Free Will. An interest of mine. I noted some reference that suggested a 
neuroscientific attempt to explain this (or perhaps explain it away). Know any 
more about this?









Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Ben Goertzel
Again, when you say that these neuroscience theories have squashed the
computational theories of mind, it is not clear to me what you mean by the
computational theories of mind.   Do you have a more precise definition of
what you mean?

ben g

On Tue, Oct 14, 2008 at 11:26 AM, Mike Tintner [EMAIL PROTECTED]wrote:

  Colin:

 others such as Hynna and Boahen at Stanford, who have an unusual hardware
 neural architecture...(Hynna, K. M. and Boahen, K. 'Thermodynamically
 equivalent silicon models of voltage-dependent ion channels', *Neural
 Computation* vol. 19, no. 2, 2007. 327-350.) ...and others ... then things
 will be diverse and authoritative. In particular, those who have recently
 essentially squashed the computational theories of mind from a neuroscience
 perspective- the 'integrative neuroscientists':

 Poznanski, R. R., Biophysical neural networks : foundations of integrative
 neuroscience, Mary Ann Liebert, Larchmont, NY, 2001, pp. viii, 503 p.

 Pomerantz, J. R., Topics in integrative neuroscience : from cells to
 cognition, Cambridge University Press, Cambridge, UK ; New York, 2008, pp.
 xix, 427 p.

 Gordon, E., Ed. (2000). Integrative neuroscience : bringing together
 biological, psychological and clinical models of the human brain. Amsterdam,
 Harwood Academic.



 Colin,

 This all looks v. interesting - googling quickly. The general integrative
 approach to the brain's functioning is clearly v. important.

 *Distinctive Paradigms/Approaches. But are any distinctive models or more
 specific paradigms emerging? It isn't immediately clear why AGI has to pay
 special attention here. Can you do a bit more selling of the importance of
 this field.

 *Models - I notice some researchers are developing models of the brain's
 functioning. Are any worthwhile? I called here sometime ago for a Systems
 Psychology and Systems AI, that would be devoted to developing overall
 models both of the intelligent brain and of AGI systems. Existing AGI
 systems like Ben's offer de facto models of what is required for an
 intelligent mind. So it would be v. valuable to be able to compare different
 models, both natural and artificial.

 *Embodied Cognitive Science.  How do you see int. neurosci. in relation to
 this? For example, I noted some purely neuronal models of the self. For me,
 only integrated brain-body models of the self are valid.

 *Free Will. An interest of mine. I noted some reference that suggested a
 neuroscientific attempt to explain this (or perhaps explain it away). Know
 any more about this?








-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome   - Dr Samuel Johnson





COMP = false? (was Re: [agi] Advocacy Is no Excuse for Exaggeration)

2008-10-14 Thread Matt Mahoney
--- On Tue, 10/14/08, Colin Hales [EMAIL PROTECTED] wrote:

 The only reason for not connecting consciousness with AGI is a
 situation where one can see no mechanism or role for it. That inability
 is no proof there is none... and I have both to the point of having a
 patent in progress.  Yes, I know it's only my claim at the moment...but
 it's behind why I believe the links to machine consciousness are not
 optional, despite the cultural state/history of the field at the moment
 being less than perfect and folks cautiously sidling around
 consciousness like it was bomb under their budgets.

Colin, I read your paper in publication that you were so kind to send me. For 
those who have not seen it, it is a well written, comprehensive survey of 
research in machine consciousness. It does not take a position on whether 
consciousness plays an essential role in AGI. (I understand that taking a 
controversial position probably would have resulted in rejection).

With regard to COMP, I assume you define COMP to be the position that 
everything the mind does is, in principle, computable. If I understand your 
position, consciousness does play a critical role in AGI. However, we don't 
know what it is. Therefore we need to find out by using scientific research, 
then duplicate that process (if possible) in a machine before it can achieve 
AGI.

Here and in your paper, you have not defined what consciousness is. Most 
philosophical arguments can be traced to disagreements about the meanings of 
words. In your paper you say that consciousness means having phenomenal states, 
but you don't define what a phenomenal state is.

Without a definition, we default to what we think it means. Everybody knows 
what consciousness is. It is something that all living humans have. We 
associate consciousness with properties of humans, such as having a name, a 
face, emotions, the ability to communicate in natural language, the ability to 
learn, to behave in ways we expect people to behave, to look like a human. 
Thus, we ascribe partial degrees of consciousness (with appropriate ethical 
treatment) to animals, video game characters, human shaped robots, and teddy 
bears.

To argue your position, you need to nail down a definition of consciousness. 
But that is hard. For example, you could define consciousness as having goals. 
So if a dog wants to go for a walk, it is conscious. But then a thermostat 
wants to keep the room at a set temperature, and a linear regression algorithm 
wants to find the best straight line fit to a set of points.

You could define consciousness as the ability to experience pleasure and pain. 
But then you need a test to distinguish experience from mere reaction, or else 
I could argue that simple reinforcement learners like 
http://www.mattmahoney.net/autobliss.txt experience pain. It boils down to how 
you define experience.
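To make the force of this example concrete: the sketch below is a hypothetical minimal reinforcement learner in the spirit of the autobliss program (it is not the actual autobliss.txt code, and all names in it are invented for illustration). Its entire "experience" of negative reward is a numeric update that steers it away from the punished action; whether that counts as feeling pain is exactly the definitional question at issue.

```python
import random

def train(steps=1000, seed=0):
    """A minimal two-action reinforcement learner. Negative reward
    ("pain") for action 1 pushes the behavioural preference away from
    it. Hypothetical sketch, not the actual autobliss.txt program."""
    rng = random.Random(seed)
    pref = 0.5  # probability of choosing action 1
    for _ in range(steps):
        action = 1 if rng.random() < pref else 0
        reward = -1.0 if action == 1 else 1.0  # action 1 is "painful"
        # nudge the preference in the direction the reward signal favours
        pref += 0.01 * reward * (action - pref)
        pref = min(max(pref, 0.0), 1.0)
    return pref

# after training, the learner almost never chooses the "painful" action
print("learned preference for the 'painful' action:", train())
```

The learner reliably avoids the punished action, yet nothing in the update rule distinguishes "experiencing" the punishment from merely reacting to a number.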

You could define consciousness as being aware of your own thoughts. But again, 
you must define aware. We distinguish conscious or episodic memories, such as 
when I recalled yesterday something that happened last month, and unconscious 
or procedural memories, such as the learned skills in coordinating my leg 
muscles while walking. We can do studies to show that conscious memories are 
stored in the hippocampus and higher layers of the cerebral cortex, and 
unconscious memories are stored in the cerebellum. But that is not really 
helpful for AGI design. The important distinction is that we remember 
remembering conscious memories but not unconscious. Reading from conscious 
memory also writes into it. But I can simulate this process in simple programs, 
for example, a database that logs transactions.
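The "reading also writes" property can be sketched in a few lines. This is a hypothetical illustration of a transaction-logging store (the class and method names are invented), not a claim about any particular database.

```python
class EpisodicLog:
    """Toy model of the idea that reading from 'conscious' memory also
    writes into it: every recall is itself appended to the log, so the
    system can later remember that it remembered."""

    def __init__(self):
        self.events = []  # append-only transaction log

    def record(self, event):
        self.events.append(("event", event))

    def recall(self, index):
        kind, content = self.events[index]
        # the act of recalling is logged as a new entry
        self.events.append(("recalled", index))
        return content

log = EpisodicLog()
log.record("saw something last month")
log.recall(0)           # reading entry 0 appends a 'recalled' entry
print(len(log.events))  # 2
```

A later recall of entry 1 would retrieve the memory of remembering, mirroring the "remember remembering" distinction drawn above.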

So if you can nail down a definition of consciousness without pointing to a 
human, I am willing to listen. Otherwise we default to the possibility of 
building AGI on COMP principles and then ascribing consciousness to it since it 
behaves just like a human.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Colin Hales



Ben Goertzel wrote:


Hi,

My main impression of the AGI-08 forum was one of over-dominance
by singularity-obsessed  and COMP thinking, which must have
freaked me out a bit.


This again is completely off-base ;-)


I also found my feeling about -08 as slightly coloured by first hand 
experience from an attendee who came away with the impression I put. 
I'll try and bolt down my paranoia a tad...


COMP, yes ... Singularity, no.  The Singularity was not a theme of 
AGI-08 and the vast majority of participant researchers are not 
seriously into Singularitarianism, futurism, and so forth.
Good, although I'll be vigorously adding non-COMP approaches to the mix, 
and trusting that is OK




There was a post-conference workshop on the Future of AGI, which about 
half of the conference attendees attended, at which the Singularity 
and related issues were discussed, among other issues.  For instance, 
the opening talk at the workshop was given by Natasha Vita-More, who 
so far as I know is not a Singularitarian per se, though an excellent 
futurist.  And one of the more vocal folks in the discussions in the 
workshop was Selmer Bringsjord, who believes COMP is false and has a 
different theory of intelligence than you or me, tied into his 
interest in Christian philosophy.



The only reason for not connecting consciousness with AGI is a
situation where one can see no mechanism or role for it.



Seeing a mechanism or role for consciousness requires a specific 
theory of consciousness that not everybody holds --- and as you surely 
know, not even everyone in the machine consciousness community holds.


Personally I view the first-person, second-person and third-person 
views as different perspectives on the universe, so I think it's a 
category error to talk about mechanisms of consciousness ... though 
one can talk about mechanisms that are correlated with particularly 
intense consciousness, for example.


See my presentation from the Nokia workshop on Machine Consciousness 
in August ... where I was the only admitted panpsychist ;-)


http://goertzel.org/NokiaMachineConsciousness.ppt
ouch 10MB safely squirreled away under GforGoertzel, thank goodness for 
the uni bandwidth.. :-)


I think I rest my case. You cannot see a physical mechanism or a role. I 
can.


Inventing/adopting a whole mental rationale that avoids the problem 
based on an assumption about a 'received view'  is not something I can 
do...I have a real physical process I can point to objectively, and a 
perspective from which it makes perfect sense that it be responsible for 
a first person perspective of the kind we receive... and I 
can't/won't talk it away just because 'Ben said so', even when the 
'category error' stick, is wielded. That old rubric excuse for an 
argument doesn't scare me a bit ... :-)  Consciousness is a problem for 
a reason, and that reason is mostly us thinking our 'categories' are right.


Interestingly, my model, if you stand back and squint a bit, can be 
interpreted as having an 'as-if pan-psychism was real' appearance. Only 
an appearance tho. It's not real.


Anyway... let's just let my story unfold, eh? It's a big one, so it'll 
take a while. Fun to be had!


Thanks for the 'Hidden Pattern' link... I shall digest it.

cheers
colin






Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Ben Goertzel
OK, but you have not yet explained what your theory of consciousness is, nor
what the physical mechanism nor role for consciousness that you propose is
... you've just alluded obscurely to these things.  So it's hard to react
except with raised eyebrows and skepticism!!

ben g

On Tue, Oct 14, 2008 at 5:27 PM, Colin Hales
[EMAIL PROTECTED]wrote:



 Ben Goertzel wrote:


 Hi,

  My main impression of the AGI-08 forum was one of over-dominance by
 singularity-obsessed  and COMP thinking, which must have freaked me out a
 bit.


 This again is completely off-base ;-)


 My feeling about AGI-08 was also slightly coloured by first-hand
 experience from an attendee who came away with the impression I
 described. I'll try and bolt down my paranoia a tad...


 COMP, yes ... Singularity, no.  The Singularity was not a theme of AGI-08
 and the vast majority of participant researchers are not seriously into
 Singularitarianism, futurism, and so forth.

 Good, although I'll be vigorously adding non-COMP approaches to the mix,
 and trusting that is OK


 There was a post-conference workshop on the Future of AGI, which about half
 of the conference attendees attended, at which the Singularity and related
 issues were discussed, among other issues.  For instance, the opening talk
 at the workshop was given by Natasha Vita-More, who so far as I know is not
 a Singularitarian per se, though an excellent futurist.  And one of the more
 vocal folks in the discussions in the workshop was Selmer Bringsjord, who
 believes COMP is false and has a different theory of intelligence than you
 or me, tied into his interest in Christian philosophy.


 The only reason for not connecting consciousness with AGI is a situation
 where one can see no mechanism or role for it.



 Seeing a mechanism or role for consciousness requires a specific theory
 of consciousness that not everybody holds --- and as you surely know, not
 even everyone in the machine consciousness community holds.

 Personally I view the first-person, second-person and third-person views as
 different perspectives on the universe, so I think it's a category error to
 talk about mechanisms of consciousness ... though one can talk about
 mechanisms that are correlated with particularly intense consciousness,
 for example.





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome   - Dr Samuel Johnson





RE: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Derek Zahn
I am reminded of this:
 
http://www.serve.com/bonzai/monty/classics/MissAnneElk



Date: Tue, 14 Oct 2008 17:14:39 -0400
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: Re: [agi] Advocacy Is no Excuse for Exaggeration

OK, but you have not yet explained what your theory of consciousness is, nor 
what the physical mechanism nor role for consciousness that you propose is ... 
you've just alluded obscurely to these things.  So it's hard to react except 
with raised eyebrows and skepticism!!

ben g




Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Colin Hales



Ben Goertzel wrote:


OK, but you have not yet explained what your theory of consciousness 
is, nor what the physical mechanism nor role for consciousness that 
you propose is ... you've just alluded obscurely to these things.  So 
it's hard to react except with raised eyebrows and skepticism!!


ben g

Of course... that's only to be expected at this stage. It can't be helped.
The physical mechanism is easy: quantum electrodynamics.
The tricky bit is the perspective from which it can be held accountable 
for subjective qualities.


ahem!... this is my theory... ahem!...
:-)
A. Elk...





Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Colin Hales

Hi Terren,
They are not 'communities' in the sense that you mean. They are labs in 
various institutions that work on M/C-consciousness (or pretend to be 
doing cog sci, whilst actually doing it :-). All I can do is point you 
at the various references in the paper and get you to keep an eye on 
them. Not terribly satisfactory, but...well that's the way it is. It is 
why I was quite interested in the AGI forum...it's a potential nexus for 
the whole lot of us.

regards
Colin

Terren Suydam wrote:


Hi Colin,

Are there other forums or email lists associated with some of the 
other AI communities you mention?  I've looked briefly but in vain ... 
would appreciate any helpful pointers.


Thanks,
Terren

--- On Tue, 10/14/08, Colin Hales [EMAIL PROTECTED] wrote:


From: Colin Hales [EMAIL PROTECTED]
Subject: Re: [agi] Advocacy Is no Excuse for Exaggeration
To: agi@v2.listbox.com
Date: Tuesday, October 14, 2008, 12:43 AM

Hi Matt,
... The Gamez paper situation is now...erm...resolved. You are
right: the paper doesn't argue that solving consciousness is
necessary for AGI. What has happened recently is a subtle shift:
those involved simply fail to make claims about the consciousness
or otherwise of the machines! This does not entail that they are
not actually working on it. They are just being cautious...Also,
you correctly observe that solving AGI on a purely computational
basis is not prohibited by the workers involved in the GAMEZ
paper.. indeed most of their work assumes it!... I don't have a
problem with this...However...'attributing' consciousness to it
based on its behavior is probably about as unscientific as it
gets. That outcome betrays no understanding whatever of
consciousness, its mechanism or its role... and merely assumes
COMP is true and creates an agreement based on ignorance. This is
fatally flawed non-science.

[BTW: We need an objective test (I have one - I am waiting for it
to get published...). I'm going to try and see where it's at in
that process. If my test is acceptable then I predict all COMP
entrants will fail, but I'll accept whatever happens... - and
external behaviour is decisive. Bear with me a while till I get it
sorted.]

I am still getting to know the folks [EMAIL PROTECTED] And the group may
be diverse, as you say ... but if they are all COMP, then that
diversity is like a group dedicated to an unresolved argument over
the colour of a fish's bicycle. If we can attract the attention of
the likes of those in the GAMEZ paper... and others such as Hynna
and Boahen at Stanford, who have an unusual hardware neural
architecture...(Hynna, K. M. and Boahen, K. 'Thermodynamically
equivalent silicon models of voltage-dependent ion channels',
/Neural Computation/ vol. 19, no. 2, 2007. 327-350.) ...and others
... then things will be diverse and authoritative. In particular,
those who have recently essentially squashed the computational
theories of mind from a neuroscience perspective- the 'integrative
neuroscientists':

Poznanski, R. R., Biophysical neural networks : foundations of
integrative neuroscience, Mary Ann Liebert, Larchmont, NY, 2001,
pp. viii, 503 p.

Pomerantz, J. R., Topics in integrative neuroscience : from cells
to cognition, Cambridge University Press, Cambridge, UK ; New
York, 2008, pp. xix, 427 p.

Gordon, E., Ed. (2000). Integrative neuroscience : bringing
together biological, psychological and clinical models of the
human brain. Amsterdam, Harwood Academic.

The only working, known model of general intelligence is the
human. If we base AGI on anything that fails to account
scientifically and completely for /all/ aspects of human
cognition, including consciousness, then we open ourselves to
critical inferiority... and the rest of science will simply find
the group an irrelevant cultish backwater. Strategically the group
would do well to make choices that attract the attention of the
'machine consciousness' crowd - they are directly linked to
neuroscience via cog sci. The crowd that runs with JETAI (journal
of theoretical and experimental artificial intelligence) is also
another relevant one. It'd be nice if those people also saw the
AGI journal as a viable repository for their output. I for one
will try and help in that regard. Time will tell I suppose.

cheers,
colin hales


Matt Mahoney wrote:

--- On Mon, 10/13/08, Colin Hales [EMAIL PROTECTED] wrote:

  

In the wider world of science it is the current state of play that the


theoretical basis for real AGI is an open and multi-disciplinary
question. From a forum that purports to be invested in achievement of
real AGI as a target, one would expect a multidisciplinary
approach on many fronts, all competing

Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Colin Hales

Replies below.

Mike Tintner wrote:

Colin:
 
others such as Hynna and Boahen at Stanford, who have an unusual 
hardware neural architecture...(Hynna, K. M. and Boahen, K. 
'Thermodynamically equivalent silicon models of voltage-dependent ion 
channels', /Neural Computation/ vol. 19, no. 2, 2007. 327-350.) ...and 
others ... then things will be diverse and authoritative. In 
particular, those who have recently essentially squashed the 
computational theories of mind from a neuroscience perspective- the 
'integrative neuroscientists':


Poznanski, R. R., Biophysical neural networks : foundations of 
integrative neuroscience, Mary Ann Liebert, Larchmont, NY, 2001, pp. 
viii, 503 p.


Pomerantz, J. R., Topics in integrative neuroscience : from cells to 
cognition, Cambridge University Press, Cambridge, UK ; New York, 2008, 
pp. xix, 427 p.


Gordon, E., Ed. (2000). Integrative neuroscience : bringing together 
biological, psychological and clinical models of the human brain. 
Amsterdam, Harwood


 


Colin,

This all looks v. interesting - googling quickly. The general 
integrative approach to the brain's functioning is clearly v. important.


A Distinctive Paradigms/Approaches. But are any distinctive models or 
more specific paradigms emerging? It isn't immediately clear why AGI 
has to pay special attention here. Can you do a bit more selling of 
the importance of this field.


I can't overstate the importance of the integrative biology approach. 
There are properties in the electrodynamics of whole collections of 
brain material *which have nothing to do with connectionism*, but are 
intimately and critically involved in the regulatory processes of 
learning. They appear in NO current models of the brain. They are 
visible in the brain when treated as an excitable cell syncytium and 
involve _all of it_... astrocytes are just as important (maybe more so) 
as neurons. And this includes all forms of connectivity: radiative, 
conductive and via gap junctions, plus endocrine/genetic regulation. You do 
not get this story unless you treat the whole matter hierarchy as a 
single unified system in all its contexts. Integrative neuroscience is 
the banner under which this kind of work will tie it all together into 
one story.


B Models - I notice some researchers are developing models of the 
brain's functioning. Are any worthwhile? I called here sometime ago 
for a Systems Psychology and Systems AI, that would be devoted to 
developing overall models both of the intelligent brain and of AGI 
systems. Existing AGI systems like Ben's offer de facto models of what 
is required for an intelligent mind. So it would be v. valuable to be 
able to compare different models, both natural and artificial.


There are so many different folks trying so many different approaches to 
brain models/intelligent behaviour/cognition... the only guide I can 
give is this: those dealing with what is actually there (the reality of 
brain material, viewed from QM/cell biology upwards) are the only ones 
on the real path to a complete picture of intelligence. 
Anyone that stops their explorations at some point in the past (say with 
connectionism or some other abstraction) and then dives out of the 
biology with a pet abstraction and starts exploring that avenue alone, 
has impoverished their view of intelligence and is operating on an 
assumption which is open to criticism in a bio-world where nobody can 
claim to have all the answers yet. COMP was an early version of this 
process. Connectionism/Neural nets was the 80s/90s flavour of the same 
thing. Now we are finally getting to the whole picture: dynamical 
systems and brain electrodynamics. Walter Freeman's camp is the most 
developed... although all he's attacked empirically is the olfactory 
bulb! So if you must have somewhere to go... he's the man. Many-body 
quantum electrodynamics is the key phrase.


My current research is operating at the computational chemistry level. 
Major holes in knowledge operate even at this most basic atomic level. 
As a result I know that all models around the world are a-priori 
impoverished and therefore open to critical defeat i.e.  I can support 
no-one in their claims as to their model as a trajectory to real AGI. I 
am doing my research precisely because of the impoverishment.


C Embodied Cognitive Science.  How do you see int. neurosci. in 
relation to this? For example, I noted some purely neuronal models of 
the self. For me, only integrated brain-body models of the self are valid.


Self emerges implicitly through embodiment and situatedness. These are 
not optional because specific physics is inherited by that very 
situation. Model it and the physics is gone, along with intelligent 
behaviour. In my (an Elk theory of consc.!) model, the concept of self 
is so far of no design value. In cog sci generally, studying it as a 
phenomenon hasn't led anywhere useful (that I can build). In a science 
where 'first person' is an 

Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Ben Goertzel
About self: you don't like Metzinger's neurophilosophy I presume?  (Being No
One is a masterwork in my view)

I agree that integrative biology is the way to go for understanding brain
function ... and I was talking to Walter Freeman about his work in the early
90's when we both showed up at the Society for Chaos Theory in Psychology
conferences ... however, I am wholly unconvinced that this work implies
anything about the noncomputationality of consciousness.

You mention QED, and I note that the only functions computable according to
QED are the Turing-computable ones.  I wonder how you square this with your
view of QED-based brain dynamics as noncomputable?   Or do you follow the
Penrose path and posit as-yet-undiscovered, mysteriously-noncomputable
quantum-gravity phenomena in brain dynamics (which, I note, requires not
only radical unknown neuroscience but also radical unknown physics and
mathematics)

-- Ben G


Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Colin Hales

Ben Goertzel wrote:


Again, when you say that these neuroscience theories have squashed 
the computational theories of mind, it is not clear to me what you 
mean by the computational theories of mind.   Do you have a more 
precise definition of what you mean?


I suppose it's a bit ambiguous. There's computer modelling of mind, and 
then there's the implementation of an actual mind using actual 
computation, then there's the implementation of a brain using 
computation, in which a mind may be said to be operating. All sorts of 
misdirection.


I mean it in the sense given in:
Pylyshyn, Z. W., Computation and cognition : toward a foundation for 
cognitive science, MIT Press, Cambridge, Mass., 1984, pp. xxiii, 292 p.
That is, that a mind is a result of a brain-as-computation. Where 
computation is meant in the sense of abstract symbol manipulation 
according to rules. 'Rules' means any logic or calculi you'd care to 
cite, including any formally specified probabilistic/stochastic language. 
This is exactly what I mean by COMP.
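For concreteness, 'abstract symbol manipulation according to rules' in exactly this sense can be caricatured in a few lines. This is an illustrative sketch only; the tokens and rewrite rules are invented for the example and stand in for any formal calculus:

```python
# Hypothetical rewrite rules over abstract tokens. The tokens mean
# nothing; only the rule table drives the dynamics.
RULES = {
    ("A", "B"): ("B", "A"),   # swap an adjacent A,B pair
    ("B", "B"): ("C",),       # fuse two adjacent Bs into a C
}

def step(symbols):
    """Apply the first matching rule once, scanning left to right."""
    for i in range(len(symbols) - 1):
        pair = (symbols[i], symbols[i + 1])
        if pair in RULES:
            return symbols[:i] + list(RULES[pair]) + symbols[i + 2:]
    return symbols  # no rule applies: a halt state

state = list("ABB")
for _ in range(5):
    state = step(state)
print(state)  # -> ['C', 'A']
```

Everything the program does is fixed by the rule table; the physics of whatever executes it is irrelevant, which is precisely the COMP stance at issue.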


Another slant on it:
Poznanski, R. R., Biophysical neural networks : foundations of 
integrative neuroscience, Mary Ann Liebert, Larchmont, NY, 2001, pp. 
viii, 503 p.
The literature has highlighted the conceptual ineptness of the computer 
metaphor of the brain.  Computational neuroscience, which serves as a 
beacon for the transfer of concepts regarding brain function to 
artificial nets for the design of neural computers, ignores the 
developmental theory of neuronal group selection and therefore seriously 
overestimates the computational nature of neuroscience. It attempts to 
explain brain function in terms of the abstract computational and 
information processing functions thought to be carried out in the brain 
{citations omitted}.


I don't know whether this answers your question, I hope so... it 
means that leaping to 'brain = computation' in the digital computer 
sense is not what is going on. It also means that a computer model of 
the full structure is also out. You have to do what the brain does, not 
run a model of it. The brain is an electrodynamic entity, so your AGI 
has to be an electrodynamic entity manipulating natural electromagnetic 
symbols in a similar fashion. The 'symbols' are aggregates in the 
cohorts mentioned by Poznanski. The electrodynamics itself IS the 
'computation', which occurs naturally in the trajectory through the 
multidimensional vector space of the matter as a whole. Some symbols 
are experienced (qualia) and some are not.


cheers
colin








Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Ben Goertzel
Sure, I know Pylyshyn's work ... and I know very few contemporary AI
scientists who adopt a strong symbol-manipulation-focused view of cognition
like Fodor, Pylyshyn and so forth.  That perspective is rather dated by
now...

But when you say


Where computation is meant in the sense of abstract symbol manipulation
according to rules. 'Rules' means any logic or calculi you'd care to cite,
including any formally specified probabilistic/stochastic language. This is
exactly what I mean by COMP.


then things get very very confusing to me.  Do you include a formal neural
net model as computation?  How about a cellular automaton simulation of
QED?  Why is this cellular automaton model not abstract symbol
manipulation?
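For reference, a cellular automaton really is nothing but table-driven symbol manipulation. A minimal sketch (Rule 110 is an arbitrary illustrative choice, standing in for whatever update table a CA simulation of QED would use):

```python
# An elementary 1D cellular automaton: each cell's next state is a pure
# table lookup over its 3-cell neighbourhood -- rule-governed symbol
# manipulation and nothing else.
RULE = 110
TABLE = {tuple(int(b) for b in f"{n:03b}"): (RULE >> n) & 1
         for n in range(8)}          # neighbourhood (l, c, r) -> next state

def step(cells):
    n = len(cells)                   # periodic (ring) boundary
    return [TABLE[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

cells = [0] * 16
cells[8] = 1                         # single live cell as the seed
for _ in range(4):
    cells = step(cells)
print("".join(map(str, cells)))
```

Whether the table encodes Rule 110 or a discretised field theory, the dynamics are fully specified by finite symbols and a finite rule set.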

If you interpret COMP to mean A human-level intelligence can be implemented
on a digital computer or as A human level intelligence can be implemented
on a digital computer connected to a robot body or even as A human level
intelligence, conscious in the same sense that humans are, can be
implemented on a digital computer connected to a robot body ... then I'll
understand you.

But when you start defining COMP in a fuzzy, nebulous way, dismissing some
dynamical systems as too symbolic for your taste (say, probabilistic
logic) and accepting others as subsymbolic enough (say, CA simulations of
QED) ... then I start to feel very confused...

I agree that Fodor and Pylyshyn's approaches, for instance, were too focused
on abstract reasoning and not enough on experiential learning and
grounding.  But I don't think this makes their approaches **more
computational** than a CA model of QED ... it just makes them **bad
computational models of cognition** ...

-- Ben G











-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome   - Dr Samuel Johnson




Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Colin Hales

Ben Goertzel wrote:


Sure, I know Pylyshyn's work ... and I know very few contemporary AI 
scientists who adopt a strong symbol-manipulation-focused view of 
cognition like Fodor, Pylyshyn and so forth.  That perspective is 
rather dated by now...


But when you say


Where computation is meant in the sense of abstract symbol 
manipulation according to rules. 'Rules' means any logic or calculii 
you'd care to cite, including any formally specified 
probablistic/stochastic language. This is exactly what I mean by COMP.



then things get very very confusing to me.  Do you include a formal 
neural net model as computation?  How about a cellular automaton 
simulation of QED?  Why is this cellular automaton model not abstract 
symbol manipulation?


If you interpret COMP to mean A human-level intelligence can be 
implemented on a digital computer or as A human level intelligence 
can be implemented on a digital computer connected to a robot body or 
even as A human level intelligence, conscious in the same sense that 
humans are, can be implemented on a digital computer connected to a 
robot body ... then I'll understand you.
We're really at cross-purposes here, aren't we?...this is a Colin/Ben 
calibration process :-) OK.


By COMP I mean any abstract symbol manipulation at all, in any context. 
The important thing is that in COMP there's a model of some kind of 
learning mechanism being run by a language of some kind, or a model of a 
modelling process implemented programmatically. In any event, the 
manipulations that are occurring are manipulations of abstract 
representations of numbers according to the language and the model being 
implemented by the computer language.




But when you start defining COMP in a fuzzy, nebulous way, dismissing 
some dynamical systems as too symbolic for your taste (say, 
probabilistic logic) and accepting others as subsymbolic enough 
(say, CA simulations of QED) ... then I start to feel very confused...


I agree that Fodor and Pylyshyn's approaches, for instance, were too 
focused on abstract reasoning and not enough on experiential learning 
and grounding.  But I don't think this makes their approaches **more 
computational** than a CA model of QED ... it just makes them **bad 
computational models of cognition** ...




Maybe a rather stark non-COMP example would help: what I would term a 
non-COMP approach is one where /there is no 'model' of cognition being 
run by anything/. The electrodynamics of the matter itself /is the 
cognition/. Literally. No imposed abstract model tells it how to learn. 
No imposed model is populated with any imposed knowledge. No human 
involvement in any of it except construction. Electrodynamic 
representational objects being manipulated by real natural 
electrodynamics... is all there is. The 'computation', if you can call 
it that, is literally Maxwell's equations (embedded on a QM substrate, 
of course) doing their natural dynamics dance in real matter, not an 
abstraction of Maxwell's equations being run on a computer.
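For contrast, this is roughly what 'an abstraction of Maxwell's equations being run on a computer' looks like: a minimal 1D finite-difference time-domain (FDTD) sketch in normalised units. Grid size, Courant number, and the Gaussian source are arbitrary illustrative choices:

```python
import math

N, STEPS, C = 200, 180, 0.5       # grid cells, time steps, Courant number
Ez = [0.0] * N                    # discretised electric field
Hy = [0.0] * N                    # discretised magnetic field

for t in range(STEPS):
    for i in range(N - 1):        # curl of E updates H
        Hy[i] += C * (Ez[i + 1] - Ez[i])
    Ez[100] += math.exp(-((t - 30) / 10.0) ** 2)   # soft Gaussian source
    for i in range(1, N):         # curl of H updates E
        Ez[i] += C * (Hy[i] - Hy[i - 1])

print(max(abs(e) for e in Ez))    # a pulse is propagating on the grid
```

Every quantity here is an abstract number updated by a rule; nothing below the chosen discretisation survives into the model, which is the point being made.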


In my AGI I have no 'model' of anything. I have the actual thing. A bad 
model of cognition, to me, is identical to a poor understanding of what 
the brain is actually doing. With a good understanding of brain function 
you then actually run the real thing, not a model of it. The trajectory 
of a model of the electrodynamics cannot be the trajectory of the real 
electrodynamics, for the fields inherit behavioural/dynamical properties 
from the deep structure of matter, properties which a model of the 
electrodynamics throws away. The real electrodynamics is surrounded by 
the matter it is situated in, and operates in accordance with it.


Remember: a scientific model of a natural process cuts a layer across 
the matter hierarchy and throws away all the underlying structure. I am 
putting the entire natural hierarchy back into the picture by using real 
electrodynamics implemented in the fashion of a real brain, not a model 
of the electrodynamics of a real brain or any other abstraction of 
apparent brain operation.


Does that do it? It's very, very different from a COMP approach.

cheers
colin





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Ben Goertzel
I still don't really get it, sorry... ;-(

Are you saying

A) that a conscious, human-level AI **can** be implemented on an ordinary
Turing machine, hooked up to a robot body

or

B) A is false

???

If you could clarify this point, I might have an easier time interpreting
your other thoughts?

I have no idea how you are defining such terms as "abstract symbol
manipulation" or "model".  Also, I wonder whether these terms have to do with
what a software system does, or with how you personally choose to
analyze/interpret a software system.

ben g

On Wed, Oct 15, 2008 at 1:16 AM, Colin Hales
[EMAIL PROTECTED] wrote:

  Ben Goertzel wrote:


 Sure, I know Pylyshyn's work ... and I know very few contemporary AI
 scientists who adopt a strong symbol-manipulation-focused view of cognition
 like Fodor, Pylyshyn and so forth.  That perspective is rather dated by
 now...

 But when you say

 
  Where computation is meant in the sense of abstract symbol manipulation
  according to rules. 'Rules' means any logic or calculi you'd care to cite,
  including any formally specified probabilistic/stochastic language. This is
  exactly what I mean by COMP.
 

 then things get very very confusing to me.  Do you include a formal neural
 net model as computation?  How about a cellular automaton simulation of
 QED?  Why is this cellular automaton model not abstract symbol
 manipulation?

 If you interpret COMP to mean A human-level intelligence can be
 implemented on a digital computer or as A human level intelligence can be
 implemented on a digital computer connected to a robot body or even as A
 human level intelligence, conscious in the same sense that humans are, can
 be implemented on a digital computer connected to a robot body ... then
 I'll understand you.

 We're really at cross-purposes here, aren't we?...this is a Colin/Ben
 calibration process :-) OK.

 By COMP I mean any abstract symbol manipulation at all in any context. The
 important thing is that in COMP there's a model of some kind of learning
 mechanism being run by a language of some kind or a model of a modelling
 process implemented programmatically. In any event the manipulations that
 are occuring are manipulations of abstract representation of numbers
 according to the language and the model being implemented by the computer
 language.


 But when you start defining COMP in a fuzzy, nebulous way, dismissing some
 dynamical systems as too symbolic for your taste (say, probabilistic
 logic) and accepting others as subsymbolic enough (say, CA simulations of
 QED) ... then I start to feel very confused...

 I agree that Fodor and Pylyshyn's approaches, for instance, were too
 focused on abstract reasoning and not enough on experiential learning and
 grounding.  But I don't think this makes their approaches **more
 computational** than a CA model of QED ... it just makes them **bad
 computational models of cognition** ...


 Maybe a rather stark non-COMP example would help. The approach I would term
 non-COMP is: *there is no 'model' of cognition being run by anything.* The
 electrodynamics of the matter itself *is the cognition*. Literally. No
 imposed abstract model tells it how to learn. No imposed model is populated
 with any imposed knowledge. No human involvement in any of it except
 construction. Electrodynamic representational objects being manipulated by
 real natural electrodynamics... is all there is. The 'computation', if you
 can call it that, is literally Maxwell's equations (embedded on a QM
 substrate, of course) doing their natural dynamics dance in real matter, not
 an abstraction of Maxwell's equations being run on a computer.

 In my AGI I have no 'model' of anything. I have the actual thing. A bad
 model of cognition, to me, is identical to a poor understanding of what the
 brain is actually doing. With a good understanding of brain function you
 then actually run the real thing, not a model of it. The trajectory of a
 model of the electrodynamics cannot be the trajectory of the real
 electrodynamics, for the fields inherit behavioural/dynamical properties
 from the deep structure of matter, properties which a model of the
 electrodynamics throws away. The real electrodynamics is surrounded by the
 matter it is situated in, and operates in accordance with it.

 Remember: a scientific model of a natural process cuts a layer across the
 matter hierarchy and throws away all the underlying structure. I am putting
 the entire natural hierarchy back into the picture by using real
 electrodynamics implemented in the fashion of a real brain, not a model of
 the electrodynamics of a real brain or any other abstraction of apparent
 brain operation.

 Does that do it? It's very, very different from a COMP approach.

 cheers
 colin



Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Colin Hales

Ben Goertzel wrote:


About self: you don't like Metzinger's neurophilosophy I presume?  
(Being No One is a masterwork in my view)


I agree that integrative biology is the way to go for understanding 
brain function ... and I was talking to Walter Freeman about his work 
in the early 90's when we both showed up at the Society for Chaos 
Theory in Psychology conferences ... however, I am wholly unconvinced 
that this work implies anything about the noncomputationality of 
consciousness.


You mention QED, and I note that the only functions computable 
according to QED are the Turing-computable ones.  I wonder how you 
square this with your view of QED-based brain dynamics as 
noncomputable?   Or do you follow the Penrose path and posit 
as-yet-undiscovered, mysteriously-noncomputable quantum-gravity 
phenomena in brain dynamics (which, I note, requires not only radical 
unknown neuroscience but also radical unknown physics and mathematics)


-- Ben G
The comment is of the 'when did you stop kicking your dog' kind. You 
assume that dog-kicking was an issue, and any answer in some way 
verifies/validates my involvement in dog-kicking! No way! :-)


Turing-computable or Xthing-computable... it is irrelevant. I am not 
'following' anyone except the example of the natural world. There are no 
inventions of anything mysterious... this is in-your-face good old 
natural matter doing what it does. I have spent an entire career being 
beaten to a pulp by the natural world of electromagnetism... This is 
really, really simple.


Nature managed to make a human capable of arguing about Turing 
computability and Gödelian incompleteness without any 'functions' or 
abstractions or any 'model' of anything! I am following the same natural 
path of actual biology and real electrodynamics of real matter. I have a 
brilliant working prototype: /the human brain/. I am implementing the 
minimal subset of what it actually does, not a model of what it does. I 
have the skills to make an inorganic version of it. I don't need the ATP 
cycle, the full endocrine or inflammatory response and/or other 
immunochemistry systems or any of the genetic overheads. All the 
self-configuration and adaptation/tuning is easy to replicate in 
hardware. When you delete all those overheads what's left is really 
simple. Hooking it to I/O is easy - been doing it for decades...


Of course - like a good little engineer I am scoping out electromagnetic 
effects using computational models. Computational chemistry, in fact. 
Appalling stuff! However, as a result my understanding of the 
electromagnetics of brain material will improve. That will result in 
appropriately engineered real electromagnetics running in my AGI, not a 
model of electromagnetics running in my AGI. Quantum mechanics will be 
doing its bit without me lifting a finger, because I am using natural 
matter as it is used in brain material.


Brilliant though it was, and as beautiful a piece of science as it was, 
Hodgkin and Huxley threw out the fields in 1952 or so, and there they 
languish, ignored until now. Putting back in the 50% that was thrown 
away 50 years ago can hardly be considered 'radical' neuroscience. 
Ignoring it for more than 50 years, when you can show it operating there 
for everyone to see... now that'd be radically stupid in anyone's book.


There's also a clinical side: the electrodynamics/field structure can be 
used to explain developmental chemistry/cellular-transport cues, and it 
also sorts out the actual origins of the EEG, both of which are 
currently open problems.


It's a little brain-bending to get your head around... but it'll sink in.

cheers
colin





Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Colin Hales


Ben Goertzel wrote:


I still don't really get it, sorry... ;-(

Are you saying

A) that a conscious, human-level AI **can** be implemented on an 
ordinary Turing machine, hooked up to a robot body


or

B) A is false


B)

Yeah that about does it.

Specifically: it will never produce an original scientific act on the 
a-priori unknown. It is the unknown bit which is important. You can't 
deliver a 'model' of the unknown that delivers all of the aspects of the 
unknown without knowing it all already! Catch-22... you have to be 
exposed /directly/ to all the actual novelty in the natural world, not 
the novelty recognised by a model of what novelty is. Consciousness 
(P-consciousness, and specifically and importantly visual 
P-consciousness) is the mechanism by which novelty in the actual DISTANT 
natural world is made apparent to the agent. Symbolic grounding in 
Qualia, NOT I/O. You do not get that information through your retina 
data. You get it from occipital visual P-consciousness.  The Turing 
machine abstracts the mechanism of access to the distal natural world 
and hence has to be informed by a model, which you don't have...


Because scientific behaviour is just a (formal, very testable) 
refinement of everyday intelligent behaviour, everyday intelligent 
behaviour of the kind humans have goes down the drain with it.


With the TM precluded from producing a scientist, it is precluded as a 
mechanism for AGI.


I like scientific behaviour. A great clarifier.

cheers
colin










Re: COMP = false? (was Re: [agi] Advocacy Is no Excuse for Exaggeration)

2008-10-14 Thread Colin Hales



Matt Mahoney wrote:

--- On Tue, 10/14/08, Colin Hales [EMAIL PROTECTED] wrote:

  

The only reason for not connecting consciousness with AGI is a
situation where one can see no mechanism or role for it. That inability
is no proof there is none... and I have both, to the point of having a
patent in progress.  Yes, I know it's only my claim at the moment... but
it's behind why I believe the links to machine consciousness are not
optional, despite the cultural state/history of the field at the moment
being less than perfect, and folks cautiously sidling around
consciousness like it was a bomb under their budgets.



Colin, I read your paper in publication that you were so kind to send me. For 
those who have not seen it, it is a well written, comprehensive survey of 
research in machine consciousness. It does not take a position on whether 
consciousness plays an essential role in AGI. (I understand that taking a 
controversial position probably would have resulted in rejection).

With regard to COMP, I assume you define COMP to be the position that 
everything the mind does is, in principle, computable. If I understand your 
position, consciousness does play a critical role in AGI. However, we don't 
know what it is. Therefore we need to find out by using scientific research, 
then duplicate that process (if possible) in a machine before it can achieve 
AGI.
  



Here and in your paper, you have not defined what consciousness is. Most 
philosophical arguments can be traced to disagreements about the meanings of 
words. In your paper you say that consciousness means having phenomenal states, 
but you don't define what a phenomenal state is.

Without a definition, we default to what we think it means. Everybody knows 
what consciousness is. It is something that all living humans have. We associate 
consciousness with properties of humans, such as having a name, a face, emotions, the 
ability to communicate in natural language, the ability to learn, to behave in ways we 
expect people to behave, to look like a human. Thus, we ascribe partial degrees of 
consciousness (with appropriate ethical treatment) to animals, video game characters, 
human shaped robots, and teddy bears.

To argue your position, you need to nail down a definition of consciousness. 
But that is hard. For example, you could define consciousness as having goals. 
So if a dog wants to go for a walk, it is conscious. But then a thermostat 
wants to keep the room at a set temperature, and a linear regression algorithm 
wants to find the best straight line fit to a set of points.
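Matt's thermostat and regression examples are easy to make concrete. A minimal, purely illustrative sketch (my own code, not from any program mentioned in this thread) of two systems that trivially "want" something:

```python
def thermostat_step(temp, setpoint, gain=0.5):
    # The thermostat "wants" the room at the setpoint: each step it
    # nudges the temperature toward its goal state.
    return temp + gain * (setpoint - temp)

def fit_line(xs, ys):
    # Ordinary least squares: the algorithm "wants" the line y = a*x + b
    # that minimises squared error over the points.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

temp = 15.0
for _ in range(20):
    temp = thermostat_step(temp, 21.0)   # converges toward 21

a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])  # recovers y = 2x + 1
```

Both systems reach their "goal" by blind arithmetic, which is exactly why a goals-based definition of consciousness over-generates.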

You could define consciousness as the ability to experience pleasure and pain. But then 
you need a test to distinguish experience from mere reaction, or else I could argue that 
simple reinforcement learners like http://www.mattmahoney.net/autobliss.txt experience 
pain. It boils down to how you define experience.
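Matt's autobliss program is not reproduced here, but the kind of simple reinforcement learner he alludes to can be sketched generically (hypothetical code of my own; the reward scalar is the whole of its "pleasure" and "pain"):

```python
import random

class TinyLearner:
    """Two-action reinforcement learner: reward is its only 'experience'."""
    def __init__(self, lr=0.2):
        self.values = [0.0, 0.0]  # running value estimates for actions 0, 1
        self.lr = lr

    def act(self):
        if self.values[0] == self.values[1]:
            return random.randrange(2)    # break ties randomly
        return 0 if self.values[0] > self.values[1] else 1

    def learn(self, action, reward):
        # Positive reward ("pleasure") raises an action's value,
        # negative reward ("pain") lowers it -- mere reaction.
        self.values[action] += self.lr * (reward - self.values[action])

agent = TinyLearner()
for _ in range(100):
    a = agent.act()
    agent.learn(a, 1.0 if a == 1 else -1.0)  # action 1 is "pleasant"
```

After training, the agent reliably avoids the "painful" action, yet nothing in the code distinguishes experience from bookkeeping — which is Matt's point.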

You could define consciousness as being aware of your own thoughts. But again, you must 
define aware. We distinguish conscious or episodic memories, such as when I 
recalled yesterday something that happened last month, and unconscious or procedural 
memories, such as the learned skills in coordinating my leg muscles while walking. We can 
do studies to show that conscious memories are stored in the hippocampus and higher 
layers of the cerebral cortex, and unconscious memories are stored in the cerebellum. But 
that is not really helpful for AGI design. The important distinction is that we remember 
remembering conscious memories but not unconscious. Reading from conscious memory also 
writes into it. But I can simulate this process in simple programs, for example, a 
database that logs transactions.
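The "reading from conscious memory also writes into it" property is indeed trivial to simulate. A toy sketch (illustrative only, not a claim about any real database):

```python
class EpisodicStore:
    """Append-only log in which every recall is itself recorded,
    so the system can later 'remember remembering'."""
    def __init__(self):
        self.log = []

    def write(self, event):
        self.log.append(("event", event))

    def recall(self, query):
        hits = [e for kind, e in self.log if kind == "event" and query in e]
        self.log.append(("recall", query))   # the read is logged too
        return hits

m = EpisodicStore()
m.write("met Alice in May")
m.recall("Alice")                 # first recall
# the act of recalling is now itself on the record:
recalls = [q for kind, q in m.log if kind == "recall"]
```

A few lines of bookkeeping reproduce the "remember remembering" behaviour, without anyone wanting to call the program conscious.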

So if you can nail down a definition of consciousness without pointing to a 
human, I am willing to listen. Otherwise we default to the possibility of 
building AGI on COMP principles and then ascribing consciousness to it since it 
behaves just like a human.

-- Matt Mahoney, [EMAIL PROTECTED]
  


I am way past merely defining anything. I know what phenomenal fields 
are constructed of: virtual Nambu-Goldstone bosons. Brain material is 
best regarded as a radically anisotropic quasi-fluid undergoing massive 
phase changes on multiple time scales. The problem is one of 
thermodynamics, not abstract computation. Duplicating the boson 
generation inorganically and applying that process to regulatory 
mechanisms of learning is exactly what I plan for my AGI chips. The 
virtual particles were named 'Qualeons' by some weird guy here that I 
was talking to one day. I forgot his name. I'd better find that out! I 
digress. :-)


It would take three PhD dissertations to cover everything from quantum 
mechanics to psychology. You have to be a polymath. And to see how they 
explain consciousness you need to internalise 'dual-aspect science', 
from which perspective it's all obvious. I have to change the whole of 
science from single to dual aspect to make it understood. 

Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-13 Thread Ben Goertzel
I agree it is far nicer when advocates of theories are willing to gracefully
entertain constructive criticisms of their theories.

However, historically, I'm not sure it's true that this sort of grace on the
part of a theorist is well-correlated with the ultimate success of that
theorist's theories.

I can think of loads of gracious theorists with poor ideas, and loads of
obnoxious, overly-harsh, self-centered theorists whose ideas have ultimately
proved excellent  ;-p

-- ben g

On Mon, Oct 13, 2008 at 12:53 PM, Jim Bromer [EMAIL PROTECTED] wrote:

 We all know that no body of theories has yet solved the major AI
 problems that confront us at this time.

 I feel that the discussions about methodologies that may be
 theoretically sound or reasonable but not proven to be completely
 effective in application should include discussion about the potential
 problems that might be associated with them.

 If there have been some results gathered through testing, then we
 would all benefit from a discussion of some of the cases where the
 principles did not work properly.  On the other hand, if there have
 not been enough experimental results from application problems to make
 insightful criticisms, then the proponents and enthusiasts should
 have the intellectual integrity to express some of the reservations
 they may have had about their theories.  This seems like a sensible
 working principle.

 That is not to say that an advocate has to accept all possible
 criticisms as being equal in value.  But when an advocate is not able
 to present or acknowledge any reasonable criticism about his still
 unsubstantiated theories, it looks like a negative indicator about the
 generality and efficacy of those theories.

 I know that we all have to deal with criticisms.  But using a critical
 examination of the theories or criticisms that you are advocating is a
 step higher than just defending your theories from any criticism.

 Jim Bromer






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome   - Dr Samuel Johnson





Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-13 Thread Jim Bromer
Well, how about privately sending me a few of those names.  I know
that Wittgenstein was pretty obnoxious after WW1, but I don't think
that he made much substantial progress during that time. I believe his
most important work was written during the war, in the trenches.  (I
may be mistaken.)
Jim Bromer

On Mon, Oct 13, 2008 at 12:57 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

 I agree it is far nicer when advocates of theories are willing to gracefully
 entertain constructive criticisms of their theories.

 However, historically, I'm not sure it's true that this sort of grace on the
 part of a theorist is well-correlated with the ultimate success of that
 theorist's theories.

 I can think of loads of gracious theorists with poor ideas, and loads of
 obnoxious, overly-harsh, self-centered theorists whose ideas have ultimately
 proved excellent  ;-p

 -- ben g

 On Mon, Oct 13, 2008 at 12:53 PM, Jim Bromer [EMAIL PROTECTED] wrote:

 We all know that no body of theories has yet solved the major AI
 problems that confront us at this time.

 I feel that the discussions about methodologies that may be
 theoretically sound or reasonable but not proven to be completely
 effective in application should include discussion about the potential
 problems that might be associated with them.

 If there have been some results gathered through testing, then we
 would all benefit with a discussion of some of the cases where the
 principles did not work properly.  On the other hand, if there have
 not been enough experimental results from application problems to make
 insightful criticisms, then the proprietors and enthusiasts should
 have the intellectual integrity to express some of the reservations
 they may have had about their theories.  This seems like a sensible
 working principle.

 That is not to say that an advocate has to accept all possible
 criticisms as being equal in value.  But when an advocate is not able
 to present or acknowledge any reasonable criticism about his still
 unsubstantiated theories, it looks like a negative indicator about the
 generality and efficacy of those theories.

 I know that we all have to deal with criticisms.  But using a critical
 examination of the theories or criticisms that you are advocating is a
 step higher than just defending your theories from any criticism.

 Jim Bromer







 




Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-13 Thread Jim Bromer
On Mon, Oct 13, 2008 at 12:57 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

 I agree it is far nicer when advocates of theories are willing to gracefully
 entertain constructive criticisms of their theories.

 However, historically, I'm not sure it's true that this sort of grace on the
 part of a theorist is well-correlated with the ultimate success of that
 theorist's theories.

 I can think of loads of gracious theorists with poor ideas, and loads of
 obnoxious, overly-harsh, self-centered theorists whose ideas have ultimately
 proved excellent  ;-p

 -- ben g

Well, a person who is able to criticize his own theories may seem
obnoxious to people who aren't.

Jim Bromer




Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-13 Thread Ben Goertzel
Jim,

I really don't have time for a long debate on the historical psychology of
scientists...

To give some random examples though: Newton, Leibniz and Gauss were
certainly obnoxious, egomaniacal pains in the ass ... Edward Teller
... Goethe, whose stubbornness was largely on-the-mark with his ideas about
morphology, but totally off-the-mark with his theory of color ... Babbage,
who likely would have succeeded at building his difference engine had his
personality been less thorny ... etc. etc. etc. etc. etc. ...

I'm certainly not suggesting we AGI researchers should try to be bigger
jerks or less open to constructive criticism ... goodness knows there is
already enough egocentricity on this list ;-) ... I'm just pointing out that
judging ideas by the personality or openness-to-criticism of their creators
is a very dubious heuristic...

ben

On Mon, Oct 13, 2008 at 1:17 PM, Jim Bromer [EMAIL PROTECTED] wrote:

 Well, how about privately sending me a few of those names.  I know
 that Wittgenstein was pretty obnoxious after WW1, but I don't think
 that he made much substantial progress during that time. I think his
 most important work was written during the war, in the trenches I
 think.  (I may be mistaken.)
 Jim Bromer

 On Mon, Oct 13, 2008 at 12:57 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
 
  I agree it is far nicer when advocates of theories are willing to
 gracefully
  entertain constructive criticisms of their theories.
 
  However, historically, I'm not sure it's true that this sort of grace on
 the
  part of a theorist is well-correlated with the ultimate success of that
  theorist's theories.
 
  I can think of loads of gracious theorists with poor ideas, and loads of
  obnoxious, overly-harsh, self-centered theorists whose ideas have
 ultimately
  proved excellent  ;-p
 
  -- ben g
 
  On Mon, Oct 13, 2008 at 12:53 PM, Jim Bromer [EMAIL PROTECTED]
 wrote:
 
  We all know that no body of theories has yet solved the major AI
  problems that confront us at this time.
 
  I feel that the discussions about methodologies that may be
  theoretically sound or reasonable but not proven to be completely
  effective in application should include discussion about the potential
  problems that might be associated with them.
 
  If there have been some results gathered through testing, then we
  would all benefit with a discussion of some of the cases where the
  principles did not work properly.  On the other hand, if there have
  not been enough experimental results from application problems to make
  insightful criticisms, then the proprietors and enthusiasts should
  have the intellectual integrity to express some of the reservations
  they may have had about their theories.  This seems like a sensible
  working principle.
 
  That is not to say that an advocate has to accept all possible
  criticisms as being equal in value.  But when an advocate is not able
  to present or acknowledge any reasonable criticism about his still
  unsubstantiated theories, it looks like a negative indicator about the
  generality and efficacy of those theories.
 
  I know that we all have to deal with criticisms.  But using a critical
  examination of the theories or criticisms that you are advocating is a
  step higher than just defending your theories from any criticism.
 
  Jim Bromer
 
 
 
 
 
 
 
  






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome   - Dr Samuel Johnson





Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-13 Thread Charles Hixson

Ben Goertzel wrote:


Jim,

I really don't have time for a long debate on the historical 
psychology of scientists...


To give some random examples though: Newton, Leibniz and Gauss were 
certainly obnoxious, egomaniacal pains in the ass though ... Edward 
Teller ... Goethe, whose stubbornness was largely on-the-mark with his 
ideas about morphology, but totally off-the-mark with his theory of 
color ... Babbage, who likely would have succeeded at building his 
difference engine were his personality less thorny ... etc. etc. etc. 
etc. etc. ...


...
ben


...

Galileo, Giordano Bruno of Nola, etc.
OTOH, Paracelsus was quite personable.  So was, reputedly, Pythagoras.  
(No good evidence on Pythagoras, though.  Only stories from 
supporters.)  (Also, consider that the Pythagoreans, possibly including 
Pythagoras, had a guy put to death for discovering that sqrt(2) was 
irrational.  [As with most things from this date, this is more legend 
than fact, but is quite probable.])


As a generality, with many exceptions, strongly opinionated persons are 
not easy to get along with unless you agree with their opinions.  It 
appears to be irrelevant whether their opinions are right, wrong, or 
undecidable.






Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-13 Thread Jim Bromer
On Mon, Oct 13, 2008 at 2:34 PM, Charles Hixson
[EMAIL PROTECTED] wrote:
 Galileo, Bruno of Nolan, etc.
 OTOH, Paracelsus was quite personable.  So was, reputedly, Pythagoras.  (No
 good evidence on Pythagoras, though.  Only stories from supporters.)  (Also,
 consider that the Pythagoreans, possibly including Pythagoras, had a guy put
 to death for discovering that sqrt(2) was irrational.  [As with most things
 from this date, this is more legend than fact, but is quite probable.])

 As a generality, with many exceptions, strongly opinionated persons are not
 easy to get along with unless you agree with their opinions.  It appears to
 be irrelevant whether their opinions are right, wrong, or undecidable.


I just want to comment that my original post was not about
agreeableness. It was about the necessity of being capable of
criticizing your own theories (and criticisms).  I just do not believe
that Newton, Galileo, Pythagoras and the rest of them were incapable
of examining their own theories from critical vantage points even
though they may have not accepted the criticisms others derived from
different vantage points.  As I said, there is no automatic equality
for criticisms.  Just because a theory is unproven it does not mean
that all criticisms have to be accepted as equally valid.

But when you see someone, theorist or critic, who almost never
demonstrates any genuine capacity for reexamining his own theories or
criticisms from any critical vantage point whatsoever, then it's a
strong negative indicator.

Jim Bromer




Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-13 Thread Colin Hales



Jim Bromer wrote:

The process of formulation of scientific theories has been characterised 
as a dynamical system nicely by Nicholas Rescher.


Rescher, N., Process philosophy : a survey of basic issues, University 
of Pittsburgh Press, Pittsburgh, 2000, p. 144.
Rescher, N., Nature and understanding : the metaphysics and method of 
science, Clarendon Press, Oxford, 2000, pp. ix, 186.


In that approach you can see critical argument operating as a 
brain process - competing brain electrodynamics that stabilises on the 
temporary 'winner', whose position may be toppled at any moment by the 
emergence of a more powerful criticism which destabilises the current 
equilibrium... and so on. The 'argument' may involve the provision of 
empirical evidence - indeed that is the norm for most sciences.
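
Purely as an illustration (the update rule and numbers here are invented, not anything from Rescher), that competition-and-toppling dynamic can be sketched in a few lines of Python:

```python
# Toy model of theory competition as a dynamical system: each theory
# carries a support level; a sufficiently strong criticism drains the
# current 'winner' and a rival takes over the temporary equilibrium.
# Numbers and update rule are invented, for illustration only.

def winner(support):
    """Return the theory with the highest current support."""
    return max(support, key=support.get)

def criticise(support, target, strength):
    """A criticism of `target` drains some of its support."""
    support[target] = max(0.0, support[target] - strength)
    return support

support = {"theory_A": 0.9, "theory_B": 0.6}
assert winner(support) == "theory_A"      # temporary equilibrium

criticise(support, "theory_A", 0.5)       # a more powerful criticism arrives
assert winner(support) == "theory_B"      # the equilibrium topples
```

The point of the sketch is only that the 'winner' is always provisional: no state is final, and any equilibrium persists only until a stronger criticism arrives.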


For a discipline to be seen as real science, then, one would 
expect to see such processes happening in a dialog between a 
diversity of views competing for ownership of scientific evidence 
through support for whatever theoretical framework seems apt. As a 
recent entrant here, seeing the dialog and the issues as they unfold, 
I would have some difficulty classifying what is going on as 
'scientific', in the sense that there is no debate calibrated against any 
overt fundamental scientific theoretical framework(s), nor defined 
testing protocols.


In the wider world of science, the current state of play is that the 
theoretical basis for real AGI is an open and multi-disciplinary 
question.  Of a forum that purports to be invested in achieving real 
AGI as a target, one would expect a multidisciplinary 
approach on many fronts, all competing scientifically for access to real 
AGI.


I am not seeing that here. In having a completely different approach to 
AGI, I hope I can contribute to the diversity of ideas and bring the 
discourse closer to that of a solid scientific discipline, with formal 
testing metrics and so forth. I hope that I can attract the attention of 
the neuroscience and physics world to this area.


Of course whether I'm an intransigent grumpy theory-zealot of the 
Newtonian kind... well... just let the ideas speak for themselves... 
:-)  The main thing is the diversity of ideas and criticism .. which 
seems a little impoverished at the moment. Without the diversity of 
approaches actively seen to compete, an AGI forum will end up 
marginalised as a club of some kind: We do (what we assume will be) AGI 
by fiddling about with XYZ. This is scientific suicide.


Here's a start: the latest survey in the key area. Like it or not, AGI 
is directly in the running for solving the 'hard problem', and machine 
consciousness is where the game is at.


Gamez, D. 'Progress in machine consciousness', Consciousness and 
Cognition vol. 17, no. 3, 2008. 887-910.


I'll do my best to diversify the discourse... I'd like to see this 
community originate real AGI and be seen as real science. To do that 
this forum should attract cognitive scientists, psychologists, 
physicists, engineers, neuroscientists. Over time, maybe we can get that 
sort of diversity happening.  I have enthusiasm for such things..


cheers
colin hales







Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-13 Thread Mike Tintner
Colin,

Yes you and Rescher are going in a good direction, but you can make it all 
simpler still, by being more specific..

We can take it for granted that we're talking here mainly about whether 
*incomplete* creative works should be criticised.

If we're talking about scientific theories, then basically we're talking in 
most cases about detective theories, about theories of whodunit or whatdunit. 
If you've got an incomplete theory about who committed a murder, because you 
don't have enough evidence, or enough of a motive - do you want criticism? In 
general, you'd be pretty foolish not to seek it. Others may point out evidence 
you've missed, or other motives, or suggest still better suspects.

If we're talking about inventions, then we're talking about tools/ machines/ 
engines etc designed to produce certain effects. If you've got an incomplete 
machine, it doesn't achieve the effect as desired. It isn't as efficient or as 
effective as you want. Should you seek criticism? In general, you'd still be 
pretty foolish not to. Others may point out improved ways of designing or 
moving your machine parts,  or of arranging the objects-to-be-moved.

And if nothing else, the simple act of presenting your ideas to others allows 
you to use them as sounding-boards - you get to hear your ideas with a clarity 
that is difficult to achieve alone, and become more aware of their 
deficiencies - and more motivated to solve them.

The difficulty with AGI is that we're dealing not with machines or software 
that are incomplete but simply non-functioning - with essentially narrow AI 
systems that haven't shown any capacity for general intelligence and 
problemsolving - with machines that want to be airplanes, but are actually 
motorbikes, and have never taken off, or shown any ability to get off the 
ground for even a few seconds. As a result, you have a whole culture where 
people are happy to tell you how their machine works - the kind of engine or in 
this case software that they're using - but not happy to tell you what their 
machine does - what specific problems it addresses - because that will 
highlight their complete failure so far.

Is that sensible? If you want to preserve your dignity, yes. Acknowledging 
failure is v. painful and humiliating. Plus, in this case, there's the v. 
serious possibility that you're building totally the wrong machine - a motorbike 
that will never be a plane (or a narrow plane that will never be a general 
bird) - or in this case, software that simply doesn't and can't address the 
right problems at all. If you actually want to get somewhere, though, and not 
remain trapped in errors, then not presenting and highlighting what your 
machine does (and how it fails) is also foolish.





Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-13 Thread Colin Hales

Mike Tintner wrote:


You paint a very human face on the process... but I don't understand how 
such things as 'painful' and 'humiliating' and 'mistaken' etc have any 
formal role ... although I can see how it can operate in reality...


Critique of incomplete works only makes sense if you can tell when you 
are complete! There's no agreed standard by which such a state can be 
judged. That being the case, then critique of incompleteness is all that 
is ever going to happen! But that is a side-bar here.


I do not see the process of proving that method-X did not work as a 
humiliation or a mistake. That's the whole point of scientific method. 
You posit: if method-X works, then it should behave THUS. Then you test 
it. If it works, you have merely minimised doubt in method-X; you 
haven't located any ultimate truth. If it fails, then you have maximised 
doubt that method-X is valid. Both are admirable outcomes, equally 
useful in the scheme of things.
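
That asymmetry between minimising and maximising doubt can be put in rough Bayesian terms - a sketch with made-up probabilities, not a claim about any particular method:

```python
# Bayes update of belief that method-X works, given one test outcome.
# The probabilities here are invented for illustration: a pass is weak
# evidence (a broken method might still pass by luck), while a fail is
# strong evidence against.
from fractions import Fraction

def update(prior, p_pass_if_works, p_pass_if_broken, passed):
    """Posterior probability that method-X works after one test."""
    p_evidence_works = p_pass_if_works if passed else 1 - p_pass_if_works
    p_evidence_broken = p_pass_if_broken if passed else 1 - p_pass_if_broken
    num = prior * p_evidence_works
    return num / (num + (1 - prior) * p_evidence_broken)

prior = Fraction(1, 2)  # initially undecided about method-X

# A passing test only reduces doubt somewhat...
after_pass = update(prior, Fraction(9, 10), Fraction(1, 2), passed=True)
# ...while a failing test raises doubt sharply.
after_fail = update(prior, Fraction(9, 10), Fraction(1, 2), passed=False)

assert after_pass > prior > after_fail
```

Either outcome moves the belief, which is the sense in which both results are equally useful: a test that cannot fail would move nothing.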


Look at Edison and the apocryphal 4000 attempts to make a light bulb. He 
proved 3999 ways NOT to make a light bulb! Where's the humiliation? :-) 
Seriously - if there is real science going on, the whole thing is lined 
with scientific success, no matter what the outcome! Failure is part of 
the system... we're here to seek truth and as long as that process is 
going on - there's no humiliation...


If you ain't failed you haven't tried... as mah granpappy yewsd ta 
say.. :-)


In the case of making choices that will result in AGI: so far, in this 
AGI grouping, I see COMP as the universal assumption. All eggs are 
in one basket. There is nothing wrong with continuing the 50-year non-stop 
run of failure (a la lightbulbs) to make AGI based on COMP 
principles... a very noble sacrifice... but hasn't 50 years of failure 
generated enough doubt in COMP that perhaps non-COMP approaches might be 
actively sought by a community seeking AGI? I'm suggesting that the 
discourse be broadened to include other options and thereby attract the 
attention of other workers in the fields of machine learning, machine 
consciousness, scientific study of consciousness, cognitive science etc 
etc - most of which do not assume COMP, but are open to

Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-13 Thread Ben Goertzel
 But when you see someone, theorist or critic, who almost never
 demonstrates any genuine capacity for reexamining his own theories or
 criticisms from any critical vantage point what so ever, then it's a
 strong negative indicator.

 Jim Bromer



I would be hesitant to draw strong conclusions about someone's internal
thought processes from their conversational behavior, though.

Some people may put on a show of defensiveness but secretly be
highly, and adeptly, self-critical

Others may put on a show of self-doubt and self-criticism, yet actually
be more dogmatic and inflexible than anyone...

I doubt very much that anyone capable of coming up with an interesting
theory is actually incapable of reexamining their own theories from a
critical
vantage point ...

Also, some people feel differently than you, and are negative to folks who
criticize their own theories -- with an attitude such as Hmmph.  If even
**you** don't fully believe your own theories, why would you expect
anyone else to??

Personally, I have swung between extremes of excessive self-doubt
and excessive self-confidence many times ... but one way or another,
I've kept pushing ahead hard with the work, regardless of the emotional
fluctuations my limbic system may cook up...

-- Ben G





Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-13 Thread Matt Mahoney
--- On Mon, 10/13/08, Colin Hales [EMAIL PROTECTED] wrote:

 In the wider world of science it is the current state of play that the
theoretical basis for real AGI is an open and multi-disciplinary
question.  A forum that purports to be invested in achievement of real
AGI as a target, one would expect that forum to a multidisciplianry
approach on many fronts, all competing scientifically for access to
real AGI. 

I think this group is pretty diverse. No two people here can agree on how to 
build AGI.

 Gamez, D. 'Progress in machine consciousness', Consciousness and
Cognition vol. 17, no. 3, 2008. 887-910.

$31.50 from Science Direct. I could not find a free version. I don't understand 
why an author would not at least post their published papers on their personal 
website. It greatly increases the chance that their paper is cited. I 
understand some publications require you to give up your copyright including 
your right to post your own paper. I refuse to publish with them.

(I don't know the copyright policy for Science Direct, but they are really 
milking the publish or perish mentality of academia. Apparently you pay to 
publish with them, and then they sell your paper).

In any case, I understand you have a pending paper on machine consciousness. 
Perhaps you could make it available. I don't believe that consciousness is 
relevant to intelligence, but that the appearance of consciousness is. Perhaps 
you can refute my position.

-- Matt Mahoney, [EMAIL PROTECTED]






Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-13 Thread Jim Bromer
On Mon, Oct 13, 2008 at 8:06 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
 Personally, I have swung between extremes of excessive self-doubt
 and excessive self-confidence many times ... but one way or another,
 I've kept pushing ahead hard with the work, regardless of the emotional
 fluctuations my limbic system may cook up...

 -- Ben G

There is a difference between 'self-doubt' and the capability to
criticize your own theories.  To be able to criticize your own
theories, (or those theories you are arguing for) you only have to be
able to examine them from a number of different vantages - including
those that may be critical.

I agree that it is unlikely that anyone is completely unable to
examine his own theories from different perspectives and that is why I
believe this capability is a fundamental method of intelligence.  On
the other hand, I believe that people who are either overly defensive
or who just cannot appreciate the possibility that some of their
theories (or criticisms) may not be as sound and extensive as they
believe them still utilize this multiple vantage perspective (that I
believe is innate) by continually refocusing their attention onto
those manifestations of their theories that they believe they have
found.  So, at worst, a closed-minded person can spend years and years
honing an argument that may have little beneficial effect on his life
or on his world, but he does so by using the same tools that more
successful people seem to use.

This opinion can be reduced to the point where it seems too obvious
to bother with.  But I believe that it has important implications for
advanced AI research.

This process of reexamining a system of theories, and then focusing on
particular aspects of those theories, can be very effective but it can
also produce a great deal of ineptness just as easily.  That is why
empirical methods are so important.  But then you run into the problem
that there is a roughly inverse relationship between the establishment
of  feasible objectives that can be  used to measure success in
developing AGI and the range of generality that can be established by
their use.

Jim Bromer




Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-13 Thread Colin Hales

Hi Matt,
... The Gamez paper situation is now... erm... resolved. You are right: 
the paper doesn't argue that solving consciousness is necessary for AGI. 
What has happened recently is a subtle shift - those involved simply 
fail to make claims about the consciousness or otherwise of the 
machines! This does not entail that they are not actually working on it. 
They are just being cautious... Also, you correctly observe that solving 
AGI on a purely computational basis is not prohibited by the workers 
involved in the Gamez paper... indeed most of their work assumes it! I 
don't have a problem with this... However, 'attributing' consciousness 
to it based on its behaviour is probably about as unscientific as it 
gets. That outcome betrays no understanding whatever of consciousness, 
its mechanism or its role... and merely assumes COMP is true and creates 
an agreement based on ignorance. This is fatally flawed non-science.


[BTW: We need an objective test (I have one - I am waiting for it to get 
published...). I'm going to try and see where it's at in that process. 
If my test is acceptable then I predict all COMP entrants will fail, but 
I'll accept whatever happens... - and external behaviour is decisive. 
Bear with me a while till I get it sorted.]


I am still getting to know the folks [EMAIL PROTECTED] And the group may be 
diverse, as you say ... but if they are all COMP, then that diversity is 
like a group dedicated to an unresolved argument over the colour of a 
fish's bicycle. If we can attract the attention of the likes of those in 
the GAMEZ paper... and others such as Hynna and Boahen at Stanford, who 
have an unusual hardware neural architecture...(Hynna, K. M. and Boahen, 
K. 'Thermodynamically equivalent silicon models of voltage-dependent ion 
channels', /Neural Computation/ vol. 19, no. 2, 2007. 327-350.)...and 
others ... then things will be diverse and authoritative. In particular, 
those who have recently essentially squashed the computational theories 
of mind from a neuroscience perspective- the 'integrative neuroscientists':


Poznanski, R. R., Biophysical neural networks : foundations of 
integrative neuroscience, Mary Ann Liebert, Larchmont, NY, 2001, pp. 
viii, 503 p.


Pomerantz, J. R., Topics in integrative neuroscience : from cells to 
cognition, Cambridge University Press, Cambridge, UK ; New York, 2008, 
pp. xix, 427 p.


Gordon, E., Ed. (2000). Integrative neuroscience : bringing together 
biological, psychological and clinical models of the human brain. 
Amsterdam, Harwood Academic.


The only working, known model of general intelligence is the human. If 
we base AGI on anything that fails to account scientifically and 
completely for /all/ aspects of human cognition, including 
consciousness, then we open ourselves to critical inferiority... and the 
rest of science will simply find the group an irrelevant cultish 
backwater. Strategically the group would do well to make choices that 
attract the attention of the 'machine consciousness' crowd - they are 
directly linked to neuroscience via cog sci. The crowd that runs with 
JETAI (journal of theoretical and experimental artificial intelligence) 
is also another relevant one. It'd be nice if those people also saw the 
AGI journal as a viable repository for their output. I for one will try 
and help in that regard. Time will tell I suppose.


cheers,
colin hales


Matt Mahoney wrote:


Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-13 Thread Ben Goertzel
Colin wrote:

 The only working, known model of general intelligence is the human. If we
 base AGI on anything that fails to account scientifically and completely for
 *all* aspects of human cognition, including consciousness, then we open
 ourselves to critical inferiority... and the rest of science will simply
 find the group an irrelevant cultish backwater. Strategically the group
 would do well to make choices that attract the attention of the 'machine
 consciousness' crowd - they are directly linked to neuroscience via cog sci.



Actually, I very strongly disagree with the above.

While I am an advocate of machine consciousness research, and will be
co-organizing a machine consciousness workshop in Hong Kong in June 2009, I
do **not** agree that focusing on machine consciousness would be likely to
help AGI to get better accepted in the general scientific community.

Rather, I think that consciousness research is currently considered at least
as eccentric as AGI research, by the scientific mainstream ... and is
considered far MORE eccentric than AGI research by the AI research
mainstream, e.g. the AAAI.

So, discussing issues of machine consciousness may be interesting and very
worthwhile for AGI in some scientific and conceptual respects... but I really
really don't think that, at the present time, more closely allying AGI with
machine consciousness would do anything but cause trouble for AGI's overall
scientific reputation.

Frankly I think that machine consciousness has at least as high a chance
of being considered an irrelevant cultish backwater as AGI ... though I
don't think that either field deserves that fate.

Comparing the two fields, I note that AGI has a larger and more active
series of conferences than machine consciousness, and is also ... pathetic
as it may be ... better-funded overall ;-p 

Regarding the connection to neuroscience and cog sci: obviously, AGI does
not need machine consciousness as an intermediary to connect to those
fields, it is already closely connected.  As one among many examples, Stan
Franklin's LIDA architecture, a leading AGI approach, was originated in
collaboration with Bernard Baars, a leading cognitive psychologist (and
consciousness theorist, as it happens).  And we had a session on AGI and
Neuroscience at AGI-08, chaired by neuroscientist Randal Koene.

I laid out my own thoughts on consciousness in some detail in The Hidden
Pattern ... I'm not trying to diss consciousness research at all ... just
pointing out that the posited reason for tying it in with AGI seems not to
be correct...


-- Ben G





Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-13 Thread Colin Hales

Ben Goertzel wrote:


My main impression of the AGI-08 forum was one of over-dominance by 
singularity-obsessed and COMP thinking, which must have freaked me out 
a bit. The IEEE Spectrum articles on the 'singularity rapture' did 
nought to improve my outlook... Thanks for bringing the Stan Franklin 
and Bernard Baars/Global Workspace etc and neuroscience links to my 
attention. I am quite familiar with them and it's a relief to see they 
connect with the AGI fray. Hopefully the penetration of these 
disciplines, and their science, will grow.


In respect of our general consciousness-in-AGI disagreement: Excellent! 
That disagreement is a sign of diversity of views. Bring it on!


The only reason for not connecting consciousness with AGI is a situation 
where one can see no mechanism or role for it. That inability is no 
proof there is none... and I have both, to the point of having a patent 
in progress.  Yes, I know it's only my claim at the moment... but it's 
behind why I believe the links to machine consciousness are not 
optional, despite the cultural state/history of the field at the moment 
being less than perfect, and folks cautiously sidling around 
consciousness like it was a bomb under their budgets.


So...You can count on me for vigorous defense of my position from 
quantum physics upwards to psychology, including support for machine 
consciousness as being on the critical path to AGI.  Hopefully in June 
'09? ;-)


I tried to locate a local copy of 'The Hidden Pattern'... no luck. Being 
in poverty-stricken student mode at the moment, I have to survive on 
library/online resources, which are pretty impressive here at 
Unimelb... but despite this the libraries around here don't have 
it... two other titles in the state library... but not that one. Oh well. 
Maybe send me a copy with my wizard hat? :-P


cheers,
colin hales



