Re: [agi] CEMI Field

2008-01-25 Thread Günther Greindl

Hi Richard,

It says, in effect "Hey, the explanation of consciousness is that it is 
caused by X" where X is something that explains absolutely nothing about 
whatever consciousness is supposed to be.  Moreover, the person 
espousing the theory, you can bet, will not be able to state exactly 
what they think consciousness actually is... they will just be able to 
tell you that, whatever it is, their candidate explains it.


I know why you are being sceptical - my initial reactions to theories of 
consciousness are usually the same, because people who propose them 
usually have ulterior motives (special status of humans, dualism, 
religion, whatever...)


I don't think that this scientist has these motives, as he is strictly 
on the physicalist side. BTW, he also writes a bit about free will; I 
disagree with him on this notion, as I cannot see where free will could 
enter a physicalist picture of reality. (But that is another controversy ;-)


The thing is this: consciousness (the basic phenomenon of awareness) 
needs explaining, and I believe science can explain it in a physicalist way.


I could just as easily say that consciousness is explained by ... hair 
follicles.  This Hair Follicle Theory of Consciousness would have the 
same qualifications to be considered the correct theory.


Well, no - because, especially via brain lesions etc., we can fairly definitely 
locate consciousness _in_ the brain (or the whole brain) - but not 
_outside_ the brain.


So we just have to look at _where_ in the brain this happens. We can now 
endorse modular theories or comprehensive ones, but for me this is not very 
satisfactory: consciousness is felt as a unity, and I can't quite 
see how individual neurons firing can lead to a unified feeling. This 
also goes for synchronous firing, because neuron A does not know that B 
fires synchronously, so how could it make a difference at the neural 
level? (I hope you know what I mean; I can elaborate.)


But the EM field is a unity - the field is generated by the summed 
contributions of _all_ neurons in the brain, and it is _caused_ by the 
electric potentials in all of those neurons.
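(To make the "unity" point concrete - a rough sketch only: by linear 
superposition, the field at a point $\mathbf{r}$ and time $t$ is just the 
sum of the contributions of the individual neurons,

  $\mathbf{E}_{\mathrm{total}}(\mathbf{r}, t) = \sum_{i=1}^{N} \mathbf{E}_i(\mathbf{r}, t)$,

with $\mathbf{E}_i$ the field generated by neuron $i$ - so the activity of 
every neuron is folded into one global physical quantity.)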


Also, the EM field is perfectly physicalist and does not invoke QM 
mysteries (à la Hameroff/Penrose, which I find bogus and which has been 
quantitatively refuted by Max Tegmark)
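(The gist of that refutation, in rough orders of magnitude: Tegmark 
estimated decoherence times for the proposed neural quantum states of 
about $10^{-20}$ to $10^{-13}$ s, against neural dynamics that unfold over 
about $10^{-3}$ to $10^{-1}$ s - a gap of ten or more orders of magnitude, 
so any quantum superpositions would decohere long before they could do 
cognitive work.)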


There are more theories of consciousness of this sort than you can swing 
a cat at.  Go to the Tucson Conference in a few months' time and you 
will be able to listen to at least fifty of them.


Yes, I know - and most of them are pure bogus at first sight - but I do not 
see how this applies to this theory.
We have four forces: gravity, the weak and strong nuclear forces, and the EM 
force. I think EM is the only force which is a candidate for explaining 
consciousness.


If you do not want to locate consciousness in any of the forces, then what?

Of course, you can say it is an emergent property, but this usually 
just begs the question.


I am not yet hooked on the EM-field theory, but it is the most 
interesting candidate I have seen in a long time.


Regards,
Günther

--
Günther Greindl
Department of Philosophy of Science
University of Vienna
[EMAIL PROTECTED]
http://www.univie.ac.at/Wissenschaftstheorie/

Blog: http://dao.complexitystudies.org/
Site: http://www.complexitystudies.org



Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-25 Thread Richard Loosemore

Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:

This whole scenario is filled with unjustified, unexamined assumptions.

For example, you suddenly say "I foresee a problem when the collective 
computing power of the network exceeds the collective computing power of 
the humans that administer it.  Humans will no longer be able to keep up 
with the complexity of the system..."


Do you mean "collective intelligence"?  Because if you mean "collective 
computing power" I cannot see what measure you are using (my laptop has 
greater computing power than me already, because it can do more 
arithmetic sums in one second than I have done in my life so far).  And 
either way, this comes right after a great big "AND THEN A MIRACLE 
HAPPENS" step ...!  You were talking about lots of dumb, specialized 
agents distributed around the world, and then all of a sudden you start 
talking as if they could be intelligent.  Why should anyone believe they 
would spontaneously do that?  First they are agents, then all of a 
sudden they are AGIs and they leave us behind:  I see no reason to allow 
that step in the argument.


In short, it looks like an even bigger non sequitur than before.


Yes, I mean collective intelligence.  The miracle is that any interface to
the large network of simple machines will appear intelligent, in the same way
that Google can make a person appear to know a lot more than they do.  It is
hard to predict what this collective intelligence will do, in the same way as
it is hard to predict human behavior by studying individual neurons.

I don't know if my outline for an infrastructure for AGI will be built as I
designed it, but I believe something like it WILL be built, probably ad-hoc
and very complex, because it has economic value.


This argument is *exactly* the same as an old, old argument that 
appeared in science fiction stories back in the early 20th century: 
some people believed that the telephone network might get one connection 
too many and suddenly wake up and be intelligent.


I do not believe you have any more justification for assuming that a set 
of dumb computers will suddenly become more than the sum of their 
collective dumbness.


The brain consists of many dumb neurons that, collectively, make 
something intelligent.  But it is not the mere fact of them being all in 
the same place at the same time that makes the collective intelligent, 
it is their organization.  Organization is everything.  You must 
demonstrate some reason why the collective net of dumb computers will be 
intelligent:  it is not enough to simply assert that they will, or 
might, become intelligent.


If you had some specific line of reasoning to show that the right 
organization could be given to them, then I will show you that the same 
organization will be put into some other set of computers, deliberately, 
under the control of the factors that I previously described, and that 
this will happen long before the general network gets that organization.




Richard Loosemore



Re: [agi] CEMI Field

2008-01-25 Thread Mike Tintner

Gunther: "we can fairly definitely locate consciousness _in_ the brain 
(or the whole brain) - but not _outside_ the brain."

Except that the brain isn't sentient itself. And evolutionarily, the brain 
is a somewhat belated development/centralisation of a distributed nervous 
system, no?  The brain is essential for consciousness, but does not 
necessarily contain consciousness?





Re: [agi] CEMI Field

2008-01-25 Thread Richard Loosemore

Günther Greindl wrote:

Hi Richard,

It says, in effect "Hey, the explanation of consciousness is that it 
is caused by X" where X is something that explains absolutely nothing 
about whatever consciousness is supposed to be.  Moreover, the person 
espousing the theory, you can bet, will not be able to state exactly 
what they think consciousness actually is... they will just be able 
to tell you that, whatever it is, their candidate explains it.


I know why you are being sceptical - my initial reactions to theories of 
consciousness are usually the same, because people who propose them 
usually have ulterior motives (special status of humans, dualism, 
religion, whatever...)


I don't think that this scientist has these motives, as he is strictly 
on the physicalist side. BTW, he also writes a bit about free will; I 
disagree with him on this notion, as I cannot see where free will could 
enter a physicalist picture of reality. (But that is another controversy ;-)


The thing is this: consciousness (the basic phenomenon of awareness) 
needs explaining, and I believe science can explain it in a physicalist way.


I take a very similar position, in some ways.  I have actually gone so 
far as to build a theory that I believe *does* address the questions of 
what we mean by consciousness, as well as the further question of what 
it actually is.  (I gave this as a poster presentation at the last 
Tucson conference).  My conclusion is a strange one that does admit that 
there is a thing called consciousness - there is definitely something 
that needs to be explained - but at the same time I believe it has a 
kind of unique status, so the physicalist/dualist controversy becomes 
finessed.


My only problem with people like the one you cite is that they often 
declare that "consciousness is X" without being clear about what they 
really think consciousness is.  I agree that some of them have 
ulterior motives, but I would be willing to forgive them that, if only 
they would start by being clear about what the C-word actually means :-).


I am (of course!) pushing my own theory a bit here, because I believe 
that what happens when you try to really pin down the concept of C is 
that, in fact, the next question gets answered almost automatically.




I could just as easily say that consciousness is explained by ... hair 
follicles.  This Hair Follicle Theory of Consciousness would have the 
same qualifications to be considered the correct theory.


Well, no - because, especially via brain lesions etc., we can fairly definitely 
locate consciousness _in_ the brain (or the whole brain) - but not 
_outside_ the brain.


So we just have to look at _where_ in the brain this happens. We can now 
endorse modular theories or comprehensive ones, but for me this is not very 
satisfactory: consciousness is felt as a unity, and I can't quite 
see how individual neurons firing can lead to a unified feeling. This 
also goes for synchronous firing, because neuron A does not know that B 
fires synchronously, so how could it make a difference at the neural 
level? (I hope you know what I mean; I can elaborate.)


Oh yes, I know exactly what you mean:  well put.



But the EM field is a unity - the field is generated by the summed 
contributions of _all_ neurons in the brain, and it is _caused_ by the 
electric potentials in all of those neurons.


Also, the EM field is perfectly physicalist and does not invoke QM 
mysteries (à la Hameroff/Penrose, which I find bogus and which has been 
quantitatively refuted by Max Tegmark)


There are some problems with simply saying that C is located in the 
brain:  mostly these problems have to do with slippage of the meaning of 
C from the hard-problem version (the problem of explaining pure 
subjectivity) to the "being awake" version.  That was certainly the 
biggest problem at the talks I saw at the Tucson conference:  many of 
the neuroscientists would start their talks with high-minded references 
to "real" consciousness (perhaps even say "hard problem" at some point), 
but then it would gradually become clear that the actual content of 
their talk was drifting into a discussion of where in the brain you find 
correlates of the subject's sense of awareness.  In other words, they 
wanted to know which bits of the brain needed to be firing if the 
subject was to have explicit knowledge of events ... which is the same 
as awakeness.


All the arguments for localization within the brain seem to fall back 
into this mode.  We could pick one of them at random, I am sure, and 
analyze it carefully, and find that it either says (a) that hard-problem 
consciousness is inside the brain because the author thinks so (with no 
actual reason), or (b) awakeness-consciousness is located inside the 
brain because the subject is only aware of things when some place is active.


The problem, I think, is that when you insist on the author of the idea 
saying exactly what C is, they cannot be specific enough to get to the 
point where any concept of physical 

Re : [agi] CEMI Field

2008-01-25 Thread Bruno Frandemiche
Consciousness is meta-reflection on reflection, as meta-cognition is to cognition.
If you know where the reflection is, you know where consciousness is.
Oops... too simple.
Bruno


- Original Message -
From: Richard Loosemore [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Friday, 25 January 2008, 16:38:36
Subject: Re: [agi] CEMI Field

Günther Greindl wrote:
 Hi Richard,
 
 It says, in effect "Hey, the explanation of consciousness is that it 
 is caused by X" where X is something that explains absolutely nothing 
 about whatever consciousness is supposed to be.  Moreover, the person 
 espousing the theory, you can bet, will not be able to state exactly 
 what they think consciousness actually is... they will just be able 
 to tell you that, whatever it is, their candidate explains it.
 
 I know why you are being sceptical - my initial reactions to theories of 
 consciousness are usually the same, because people who propose them 
 usually have ulterior motives (special status of humans, dualism, 
 religion, whatever...)
 
 I don't think that this scientist has these motives, as he is strictly 
 on the physicalist side. BTW, he also writes a bit about free will; I 
 disagree with him on this notion, as I cannot see where free will could 
 enter a physicalist picture of reality. (But that is another controversy ;-)
 
 The thing is this: consciousness (the basic phenomenon of awareness) 
 needs explaining, and I believe science can explain it in a physicalist way.

I take a very similar position, in some ways.  I have actually gone so 
far as to build a theory that I believe *does* address the questions of 
what we mean by consciousness, as well as the further question of what 
it actually is.  (I gave this as a poster presentation at the last 
Tucson conference).  My conclusion is a strange one that does admit that 
there is a thing called consciousness - there is definitely something 
that needs to be explained - but at the same time I believe it has a 
kind of unique status, so the physicalist/dualist controversy becomes 
finessed.

My only problem with people like the one you cite is that they often 
declare that "consciousness is X" without being clear about what they 
really think consciousness is.  I agree that some of them have 
ulterior motives, but I would be willing to forgive them that, if only 
they would start by being clear about what the C-word actually means :-).

I am (of course!) pushing my own theory a bit here, because I believe 
that what happens when you try to really pin down the concept of C is 
that, in fact, the next question gets answered almost automatically.



 I could just as easily say that consciousness is explained by ... hair 
 follicles.  This Hair Follicle Theory of Consciousness would have the 
 same qualifications to be considered the correct theory.
 
 Well, no - because, especially via brain lesions etc., we can fairly definitely 
 locate consciousness _in_ the brain (or the whole brain) - but not 
 _outside_ the brain.
 
 So we just have to look at _where_ in the brain this happens. We can now 
 endorse modular theories or comprehensive ones, but for me this is not very 
 satisfactory: consciousness is felt as a unity, and I can't quite 
 see how individual neurons firing can lead to a unified feeling. This 
 also goes for synchronous firing, because neuron A does not know that B 
 fires synchronously, so how could it make a difference at the neural 
 level? (I hope you know what I mean; I can elaborate.)

Oh yes, I know exactly what you mean:  well put.

 
 But the EM field is a unity - the field is generated by the summed 
 contributions of _all_ neurons in the brain, and it is _caused_ by the 
 electric potentials in all of those neurons.
 
 Also, the EM field is perfectly physicalist and does not invoke QM 
 mysteries (à la Hameroff/Penrose, which I find bogus and which has been 
 quantitatively refuted by Max Tegmark)

There are some problems with simply saying that C is located in the 
brain:  mostly these problems have to do with slippage of the meaning of 
C from the hard-problem version (the problem of explaining pure 
subjectivity) to the "being awake" version.  That was certainly the 
biggest problem at the talks I saw at the Tucson conference:  many of 
the neuroscientists would start their talks with high-minded references 
to "real" consciousness (perhaps even say "hard problem" at some point), 
but then it would gradually become clear that the actual content of 
their talk was drifting into a discussion of where in the brain you find 
correlates of the subject's sense of awareness.  In other words, they 
wanted to know which bits of the brain needed to be firing if the 
subject was to have explicit knowledge of events ... which is the same 
as awakeness.

All the arguments for localization within the brain seem to fall back 
into this mode.  We could pick one of them at random, I am sure, and 
analyze it carefully, and find that it either says (a) that hard-problem 
consciousness is inside the brain 

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-25 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote:
 This whole scenario is filled with unjustified, unexamined assumptions.
 
 For example, you suddenly say "I foresee a problem when the collective 
 computing power of the network exceeds the collective computing power of 
 the humans that administer it.  Humans will no longer be able to keep up 
 with the complexity of the system..."
 
 Do you mean "collective intelligence"?  Because if you mean "collective 
 computing power" I cannot see what measure you are using (my laptop has 
 greater computing power than me already, because it can do more 
 arithmetic sums in one second than I have done in my life so far).  And 
 either way, this comes right after a great big "AND THEN A MIRACLE 
 HAPPENS" step ...!  You were talking about lots of dumb, specialized 
 agents distributed around the world, and then all of a sudden you start 
 talking as if they could be intelligent.  Why should anyone believe they 
 would spontaneously do that?  First they are agents, then all of a 
 sudden they are AGIs and they leave us behind:  I see no reason to allow 
 that step in the argument.
 
 In short, it looks like an even bigger non sequitur than before.

Yes, I mean collective intelligence.  The miracle is that any interface to
the large network of simple machines will appear intelligent, in the same way
that Google can make a person appear to know a lot more than they do.  It is
hard to predict what this collective intelligence will do, in the same way as
it is hard to predict human behavior by studying individual neurons.

I don't know if my outline for an infrastructure for AGI will be built as I
designed it, but I believe something like it WILL be built, probably ad-hoc
and very complex, because it has economic value.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-25 Thread Richard Loosemore

Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:
You must 
demonstrate some reason why the collective net of dumb computers will be 
intelligent:  it is not enough to simply assert that they will, or 
might, become intelligent.


The intelligence comes from an infrastructure that routes messages to the
right experts.  I know it is hard to imagine because distributed search
engines haven't been built yet, but it is similar to the way that Google makes
people appear smarter.  In my thesis I investigated whether distributed search
scales to large networks, and it does. http://cs.fit.edu/~mmahoney/thesis.html
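
[A minimal illustrative sketch of the "routes messages to the right 
experts" idea - hypothetical, not taken from Mahoney's thesis; the node 
names and the keyword-overlap scoring rule are assumptions made purely 
for illustration:

class Expert:
    def __init__(self, name, keywords):
        self.name = name
        self.keywords = set(keywords)

    def answer(self, query):
        # A real expert node would do actual work; here it just acknowledges.
        return "[%s] handling: %s" % (self.name, query)

class Router:
    """Toy infrastructure: forward each query to the best-matching expert."""
    def __init__(self, experts):
        self.experts = experts

    def route(self, query):
        words = set(query.lower().split())
        # Score each expert by how many of its advertised keywords appear
        # in the query, and hand the query to the highest scorer.
        best = max(self.experts, key=lambda e: len(e.keywords & words))
        return best.answer(query)

router = Router([
    Expert("weather-node", {"rain", "forecast", "temperature"}),
    Expert("math-node", {"integral", "prime", "sum"}),
])
print(router.route("what is the forecast for rain tomorrow"))
# -> [weather-node] handling: what is the forecast for rain tomorrow

In this toy version the competence of the whole is exactly the routing 
rule plus whatever the individual experts can do - which is the point 
Loosemore presses below.]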


Your analogy to people appearing smarter because they can use Google 
simply does not apply to the case you propose.


You suggest that a collection of *sub-intelligent* (this is crucial) 
computer programs can add up to full intelligence just in virtue of their 
existence.


This is not the same as a collection of *already-intelligent* humans 
appearing more intelligent because they have access to a lot more 
information than they did before.


[dumb machine] + Google = dumb machine.

[smart human] + Google = smarter human.

1) There is every reason to believe that a human intelligence could 
become smarter as a result of having quick access to an internet 
knowledgebase.


2) There is absolutely no reason to believe that a bunch of 
sub-intelligent computers will get up over the threshold and become 
intelligent, just because they have access to an internet knowledgebase.


You have work to do (a lot of work!) to persuade us to accept the idea 
contained in (2).


This is similar to the machine-translation fiasco in the 1960s:  they 
believed that the only thing standing in the way of a full-up 
translation system was lots of good dictionary lookup.  It simply was 
not true:  a dictionary maketh not a mind.


As for your last comment that "The intelligence comes from an 
infrastructure that routes messages to the right experts" - this 
simply begs the question. If the infrastructure were smart enough to 
always know how to find the right expert, the infrastructure would BE 
the intelligence, and the experts that it finds would just be a bunch 
of dictionaries or subcomponents.  You are implicitly assuming 
intelligence in that infrastructure, without showing where the 
intelligence comes from.  Certainly you give no reason why we should 
believe that the infrastructure would spontaneously become intelligent 
without us doing a lot of work.


If we knew how to put the intelligence into that infrastructure we 
would know how to put it into other places, and then (once again) we are 
back to the scenario that I discussed, where someone has explicitly 
figured out how to build an intelligence, and then deliberately chooses 
what to do with it (i.e., there is no accidental emergence, beyond human 
control).



Richard Loosemore



Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-25 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote:
 You suggest that a collection of *sub-intelligent* (this is crucial) 
 computer programs can add up to full intelligence just in virtue of their 
 existence.
 
 This is not the same as a collection of *already-intelligent* humans 
 appearing more intelligent because they have access to a lot more 
 information than they did before.
 
 [dumb machine] + Google = dumb machine.
 
 [smart human] + Google = smarter human.

My point of concern is when individual machines (not the whole network) exceed
individual brains in intelligence.  They can't yet, but they will.  Google
already knows more than any human, and can retrieve the information faster,
but it can't launch a singularity.  When your computer can write and debug
software faster and more accurately than you can, then you should worry.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-25 Thread Richard Loosemore

Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:
You suggest that a collection of *sub-intelligent* (this is crucial) 
computer programs can add up to full intelligence just in virtue of their 
existence.


This is not the same as a collection of *already-intelligent* humans 
appearing more intelligent because they have access to a lot more 
information than they did before.


[dumb machine] + Google = dumb machine.

[smart human] + Google = smarter human.


My point of concern is when individual machines (not the whole network) exceed
individual brains in intelligence.  They can't yet, but they will.  Google
already knows more than any human, and can retrieve the information faster,
but it can't launch a singularity.  When your computer can write and debug
software faster and more accurately than you can, then you should worry.


I think this conversation is going nowhere:  your above paragraph once 
again ignores everything I have said up to now.


No computer is going to start writing and debugging software faster and 
more accurately than we can UNLESS we design it to do so, and during the 
design process we will have ample opportunity to ensure that the machine 
will never be able to pose a danger of any kind.




Richard Loosemore
