Re: [agi] COMP = false

2008-10-04 Thread William Pearson
Hi Colin,

I'm not entirely sure that computers can implement consciousness. But
your arguments don't sway me one way or the other. A brief
reply follows.

2008/10/4 Colin Hales [EMAIL PROTECTED]:
 Next empirical fact:
 (v) When  you create a turing-COMP substrate the interface with space is
 completely destroyed and replaced with the randomised machinations of the
 matter of the computer manipulating a model of the distal world. All actual
 relationships with the real distal external world are destroyed. In that
 circumstance the COMP substrate is implementing the science of an encounter
 with a model, not an encounter with the actual distal natural world.

 No amount of computation can make up for that loss, because you are in a
 circumstance of an intrinsically unknown distal natural world, (the novelty
 of an act of scientific observation).
 .

But humans don't encounter the world directly, else optical illusions
wouldn't exist, we would know exactly what was going on.

Take this site for example. http://www.michaelbach.de/ot/

It is impossible by physics to do vision perfectly without extra
information, but we do not do vision by any means perfectly, so I see
no need to posit an extra information source.

  Will


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=114414975-3c8e69
Powered by Listbox: http://www.listbox.com


Re: [agi] COMP = false

2008-10-04 Thread Vladimir Nesov
Basically, you are saying that there is some unknown physics mojo
going on. The mystery of mind looks as mysterious as the mystery of
physics, therefore it requires the mystery of physics and can derive
further mysteriousness from it, becoming inherently mysterious. It's
bad, bad non-science.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] COMP = false

2008-10-04 Thread Mike Tintner


Hi Colin,

Many thanks for the detailed reply. You seem to be taking a long, winding 
philosophical route to asserting that intelligence depends on consciousness, 
in the sense of what I would call a sensory movie of the world - vision + 
sound/smell/taste etc.


I absolutely agree with that basic assertion - but any philosophical 
argument will have no serious interest IMO for AGI-ers.


The only way to change them is to demonstrate the *unique* properties of 
sensory pictures of the world - and why they CANNOT be reduced to 
logical/mathematical/programming/linguistic form, as AGI-ers still wildly 
delude themselves.


(Obviously evolution has taken consciousness as primary for intelligence and 
vastly more important than logic or any form of rationality - but AGI-ers, 
unlike the rest of the scientific world, aren't interested in evolution 
either.)


You're dealing with Helen Kellers here :) - so you have to show them why a 
movie is essential to intelligence in *their* terms.




Hi Mike,
I can give the highly abridged flow of the argument:

1) It refutes COMP, where COMP = Turing machine-style abstract symbol 
manipulation. In particular the 'digital computer' as we know it.
2) The refutation happens in one highly specific circumstance. In being 
false in that circumstance it is false as a general claim.
3) The circumstances:  If COMP is true then it should be able to implement 
an artificial scientist with the following faculties:
   (a) scientific behaviour (goal-delivery of a 'law of nature', an 
abstraction BEHIND the appearances of the distal natural world, not merely 
the report of what is there),

   (b) scientific observation based on the visual scene,
   (c) scientific behaviour in an encounter with radical novelty. (This is 
what humans do)


The argument's empirical knowledge is:
1) The visual scene is visual phenomenal consciousness. A highly specified 
occipital lobe deliverable.
2) In the context of a scientific act, scientific evidence is 'contents of 
phenomenal consciousness'. You can't do science without it. In the context 
of this scientific act, visual P-consciousness and scientific evidence are 
identities. P-consciousness is necessary but on its own is not sufficient. 
Extra behaviours are needed, but these are a secondary consideration here.


NOTE: Do not confuse scientific observation  with the scientific 
measurement, which is a collection of causality located in the distal 
external natural world. (Scientific measurement is not the same thing as 
scientific evidence, in this context). The necessary feature of a visual 
scene is that it operate whilst faithfully inheriting the actual causality 
of the distal natural world. You cannot acquire a law of nature without 
this basic need being met.


3) Basic physics says that it is impossible for a brain to create a visual 
scene using only the inputs acquired by the peripheral stimulus received 
at the retina. This is due to fundamentals of quantum degeneracy. 
Basically there are an infinite number of distal external worlds that can 
deliver the exact same photon impact. The transduction that occurs in the 
retinal rod/cones is entirely a result of protein isomerisation. All 
information about distal origins is irretrievably gone. An impacting photon 
could have come across the room or across the galaxy. There is no 
information about origins in the transduced data in the retina.
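The degeneracy claim can be illustrated, in a far simpler classical setting, with an ordinary pinhole projection (a sketch of mine, not part of Colin's quantum argument): every point along a viewing ray lands on the same image coordinate, so the sensor data alone cannot recover distance.

```python
# Toy pinhole camera: a 3D point (x, y, z) projects to the 2D pixel
# (f*x/z, f*y/z). Any point on the same viewing ray yields the
# identical pixel, so the image alone cannot distinguish
# "across the room" from "across the galaxy".

def project(point, f=1.0):
    x, y, z = point
    return (f * x / z, f * y / z)

near = (1.0, 2.0, 4.0)    # a point 4 units away
far = (2.5, 5.0, 10.0)    # a different point on the same ray, 10 units away

assert project(near) == project(far)   # indistinguishable at the image plane
print(project(near))                   # (0.25, 0.5) from either world
```

Infinitely many points of the form (0.25*t, 0.5*t, t) collapse onto that one pixel, which is the sense in which the inverse problem is underdetermined.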


That established, you are then faced with a paradox:

(i) (3) says a visual scene is impossible.
(ii) Yet the brain makes one.
(iii) To make the scene some kind of access to distal spatial relations 
must be acquired as input data in addition to that from the retina.

(iv) There are only 2 places that data can come from...
   (a) via matter (which we already have - retinal impact at the 
boundary that is the agent periphery)
   (b) via space (at the boundary of the matter of the brain with 
space, the biggest boundary by far).
So, the conclusion is that the brain MUST acquire the necessary data via 
the spatial boundary route. You don't have to know how. You just have no 
other choice. There is no third party in there to add the necessary data 
and the distal world is unknown. There is literally nowhere else for the 
data to come from. Matter and Space exhaust the list of options. (There is 
always magical intervention ... but I leave that to the space cadets.)


That's probably the main novelty for the reader to encounter. But we 
are not done yet.


Next empirical fact:
(v) When  you create a turing-COMP substrate the interface with space is 
completely destroyed and replaced with the randomised machinations of the 
matter of the computer manipulating a model of the distal world. All 
actual relationships with the real distal external world are destroyed. In 
that circumstance the COMP substrate is implementing the science of an 
encounter with a model, not an encounter with the actual distal natural 
world.


No amount of computation can 

Re: [agi] I Can't Be In Two Places At Once.

2008-10-04 Thread Mike Tintner
Matthias: I think it is extremely important, that we give an AGI no bias 
about

space and time as we seem to have.

Well, I ( possibly Ben) have been talking about an entity that is in many 
places at once - not in NO place. I have no idea how you would swing that - 
other than what we already have - machines that are information-processors 
with no sense of identity at all. Do you? 







Re: [agi] I Can't Be In Two Places At Once.

2008-10-04 Thread Stan Nilsen

Mike Tintner wrote:
Matthias: I think it is extremely important, that we give an AGI no bias 
about

space and time as we seem to have.

Well, I ( possibly Ben) have been talking about an entity that is in 
many places at once - not in NO place. I have no idea how you would 
swing that - other than what we already have - machines that are 
information-processors with no sense of identity at all. Do you?






Seems hard to imagine information processing without identity. 
Intelligence is about invoking methods.  Methods are created because 
they are expected to create a result.  The result is the value - the 
value that allows them to be selected from many possible choices.


Identity involves placing one's powers into a situation that is unique 
according to place and time.  If it's Matt's global brain, then it will 
be critical for agents to grasp the value factors - which come from the 
time and place one inhabits.


Is it the time and space bias that is the issue?  If so, what is the 
bias that humans have which machines shouldn't?


just quick reactive thoughts...
Stan




AW: [agi] I Can't Be In Two Places At Once.

2008-10-04 Thread Dr. Matthias Heger
From my points 1. and 2. it should be clear that I was not talking about a
distributed AGI which is in NO place. The AGI you mean consists of several
parts which are in different places. But this is already the case with the
human body. The only difference is that the parts of the distributed AGI
can be placed several kilometers from each other. But this is only a
quantitative and not a qualitative point.

Now to my statement of a useful representation of space and time for AGI.
We know, that our intuitive understanding of space and time works very well
in our life. But the ultimate goal of AGI is that it can solve problems
which are very difficult for us. If we give an AGI the bias of a model of
space and time which is not state of the art of our knowledge from physics,
then we give the AGI a certain limitation which we ourselves suffer from and
which is not necessary for it.
This point has nothing to do with the question whether the AGI is
distributed or not.
I mentioned this point because your question is related to the more
fundamental question of whether, and which, bias we should give an AGI for
the representation of space and time.


Original Message-
From: Mike Tintner [mailto:[EMAIL PROTECTED] 
Sent: Saturday, 4 October 2008 14:13
To: agi@v2.listbox.com
Subject: Re: [agi] I Can't Be In Two Places At Once.

Matthias: I think it is extremely important, that we give an AGI no bias 
about
space and time as we seem to have.

Well, I ( possibly Ben) have been talking about an entity that is in many 
places at once - not in NO place. I have no idea how you would swing that - 
other than what we already have - machines that are information-processors 
with no sense of identity at all. Do you? 






Re: Risks of competitive message routing (was Re: [agi] Let's face it, this is just dumb.)

2008-10-04 Thread Ben Goertzel
On Fri, Oct 3, 2008 at 9:57 PM, Matt Mahoney [EMAIL PROTECTED] wrote:

 --- On Fri, 10/3/08, Ben Goertzel [EMAIL PROTECTED] wrote:
  You seem to misunderstand the notion of a Global Brain, see
 
  http://pespmc1.vub.ac.be/GBRAIFAQ.html
 
  http://en.wikipedia.org/wiki/Global_brain

 You are right. That is exactly what I am proposing.



It's too bad you missed the Global Brain 0 workshop that Francis Heylighen
and I organized in Brussels in 2001 ...

Some larger follow-up Global Brain conferences were planned, but Francis and
I both got distracted by other things

It would be an exaggeration to say that any real collective conclusions were
arrived at, during the workshop, but it was certainly
interesting...





 I am open to alternative suggestions.



Well, what I suggested in my 2002 book Creating Internet Intelligence was
essentially a global brain based on a hybrid model:

-- a human-plus-computer-network global brain along the lines of what you
and Heylighen suggest

coupled with

-- a superhuman AI mind, that interacts with and is coupled with this global
brain

To use a simplistic metaphor,

-- the superhuman AI mind at the center of the hybrid global brain would
provide an overall goal system and attentional-focus, and

-- the human-plus-computer-network portion of the hybrid global brain would
serve as a sort of unconscious for the hybrid global brain...

This is one way that humans may come to, en masse, interact with superhuman
non-human AI

Anyway this was a fun line of thinking but since that point I diverted
myself more towards the creation of the superhuman-AI component

At the time I had a lot of ideas about how to modify Internet infrastructure
so as to make it more copacetic to the emergence of a
human-plus-computer-network, collective-intelligence type global brain.   I
think many of those ideas could have worked, but they are not the direction
in which the development of the Net went, and obviously I (like you) lack the
influence to nudge the Net-masters in that direction.  Keeping a
build-a-superhuman-AI project moving is not easy either, but it's a more
tractable task...

-- Ben G





AW: [agi] I Can't Be In Two Places At Once.

2008-10-04 Thread Dr. Matthias Heger
Stan wrote:

Seems hard to imagine information processing without identity. 
Intelligence is about invoking methods.  Methods are created because 
they are expected to create a result.  The result is the value - the 
value that allows them to be selected from many possible choices.


Identity can be distributed in space. My conscious model of myself is not
located at a single point in space. I identify myself with my body. I do not
even have to know that I have a brain. But my body is distributed in space.
It is not a point. This is also the case with my conscious model of myself
(= model of my body).

Furthermore, if you think more from a computer scientist's point of view: even
your brain is distributed in space and is not at a single place. Your brain
consists of a huge number of processors where each processor is at a
different place. So I see no new problem with distributed AGI at all.

Stan wrote

Is it the time and space bias that is the issue?  If so, what is the 
bias that humans have which machines shouldn't?


I don't know whether it is a bias in our space and time representation or
whether it comes from a bias within our learning algorithms. But all humans
create a model of their environment with the law that a physical object has a
certain position at a certain time. We also think intuitively that the
distance to a point does not depend on the velocity towards this point.
These are two examples which are completely wrong, as we know from modern
physics. Why is it so important for an AGI to know this?
Because AGI should help us with progress in technology, and the most
promising open fields in technology are in the nanoworld and the
macrocosm. It would be useful if an AGI had an intuitive understanding of
the laws in these worlds.
We should avoid rebuilding our own weaknesses within an AGI.
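Matthias's second example, that measured distance really does depend on the observer's velocity, is ordinary special relativity: length contraction, L = L0 * sqrt(1 - v^2/c^2). A quick numeric sketch (my illustration; the function name is made up):

```python
import math

C = 299_792_458.0  # speed of light in m/s

def contracted_length(rest_length, v):
    # Length an observer moving at speed v measures along the direction
    # of motion, per the standard Lorentz contraction formula.
    return rest_length * math.sqrt(1.0 - (v / C) ** 2)

rest = 1000.0                            # 1 km at rest
print(contracted_length(rest, 0.0))      # 1000.0 -- no contraction at rest
print(contracted_length(rest, 0.9 * C))  # ~435.9 -- measured at 0.9c
```

At everyday speeds the correction is immeasurably small, which is exactly why our intuitive model never needed it.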






Re: [agi] I Can't Be In Two Places At Once.

2008-10-04 Thread Mike Tintner

Matthias,

First, I see both a human body-brain and a distributed entity, such as a 
computer network, as *physically integrated* units, with a sense of their 
physical integrity. The fascinating thought (perhaps unrealistic) for me 
was of being able to physically look at a scene or scenes from different 
POVs more or less simultaneously - a thought worth exploring.


Second, your idea, AFAICT, of an unbiased-as-to-time-and-space 
intelligence, while v. vague, is also worth exploring. I suspect the 
all-important fallacy here is of pure objectivity - the idea that an 
object or scene or world can be depicted WITHOUT any location or reference 
or comparison. When we talk of time and space, which are fictions that have 
no concrete existence, we are really talking (no?) of frameworks we use to 
locate and refer other things to. Clocks. 3/4-dimensional grids... All 
things have to be referred and compared to other things in order to be 
understood, which is an inevitably biased process. So is there any such 
thing as your non-bias? Just my first stumbling thoughts.





Matthias:


From my points 1. and 2. it should be clear that I was not talking about a

distributed AGI which is in NO place. The AGI you mean consists of several
parts which are in different places. But this is already the case with the
human body. The only difference is that the parts of the distributed AGI
can be placed several kilometers from each other. But this is only a
quantitative and not a qualitative point.

Now to my statement of a useful representation of space and time for AGI.
We know, that our intuitive understanding of space and time works very well
in our life. But the ultimate goal of AGI is that it can solve problems
which are very difficult for us. If we give an AGI bias of a model of space
and time which is not state of the art of the knowledge we have from
physics, then we give AGI a certain limitation which we ourselves suffer
from and which is not necessary for an AGI.
This point has nothing to do with the question whether the AGI is
distributed or not.
I mentioned this point because your question has relations to the more
fundamental question whether and which bias we should give AGI for the
representation of space and time.


Original Message-
From: Mike Tintner [mailto:[EMAIL PROTECTED]
Sent: Saturday, 4 October 2008 14:13
To: agi@v2.listbox.com
Subject: Re: [agi] I Can't Be In Two Places At Once.

Matthias: I think it is extremely important, that we give an AGI no bias
about
space and time as we seem to have.

Well, I ( possibly Ben) have been talking about an entity that is in many
places at once - not in NO place. I have no idea how you would swing that -
other than what we already have - machines that are information-processors
with no sense of identity at all. Do you?






Re: [agi] COMP = false

2008-10-04 Thread Colin Hales

Hi Will,
It's not an easy thing to fully internalise the implications of quantum 
degeneracy. I find physicists and chemists have no trouble accepting it, 
but in the disciplines above that various levels of mental brick walls 
are in place. Unfortunately physicists and chemists aren't usually asked 
to create vision!... I inhabit an extreme multidisciplinary zone. This 
kind of mental resistance comes with the territory. All I can say is 
'resistance is futile, you will be assimilated' ... eventually. :-) It's 
part of my job to enact the necessary advocacy. In respect of your 
comments I can offer the following:


You are exactly right: humans don't encounter the world directly (naive 
realism). Nor are we entirely operating from a cartoon visual 
fantasy(naive solipsism). You are also exactly right in that vision is 
not 'perfect'. It has more than just a level of indirectness in 
representation, it can malfunction and be fooled - just as you say. In 
the benchmark behaviour: scientific behaviour, we know scientists have 
to enact procedures (all based around the behaviour called 
'objectivity') which minimise the impact of these aspects of our 
scientific observation system.


However, this has nothing to say about the need for an extra information 
source, which is necessary because there is not enough information in the 
signals to do the job. This is what you cannot see. It took me a long while to 
discard the tendency to project my mental capacity  into the job the 
brain has when it encounters a retinal data stream. In vision processing 
using computing we know the structure of the distal natural world. We 
imagine the photon/CCD camera chip measurements to be the same as that 
of the retina. It looks like a simple reconstruction job.


But it is not like that at all. It is impossible to tell, from the 
signals in their natural state in the brain, whether they are about 
vision or sound or smell. They all look the same. So I did not 
completely reveal the extent of the retinal impact/visual scene 
degeneracy in my post. The degeneracy operates on multiple levels. 
Signal encoding into standardised action potentials is another level.
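Colin's point that the signals carry no intrinsic modality label can be mimicked with raw buffers (a loose analogy of mine, not Colin's formulation): the same array of numbers is equally a tiny greyscale "image" and a clip of 8-bit audio "samples"; nothing in the data says which it is.

```python
import random

random.seed(0)
samples = [random.randrange(256) for _ in range(64)]  # 64 arbitrary byte values

# Read the very same numbers as an 8x8 greyscale image...
image = [samples[r * 8:(r + 1) * 8] for r in range(8)]

# ...or as 64 unsigned 8-bit audio samples.
audio = bytes(samples)

# The buffer itself carries no modality label:
assert [px for row in image for px in row] == list(audio)
```

Any interpretation as vision, sound, or smell is imposed from outside the data, which is the force of the point above.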


Maybe I can just paint a mental picture of the job the brain has to do. 
Imagine this:


You have no phenomenal consciousness at all. Your internal life is that of a 
dreamless sleep.

Except ... for a new perceptual mode called Wision.
Looming in front of you embedded in a roughly hemispherical blackness is 
a gigantic array of numbers.

The numbers change.

Now:
a) make a visual scene out of it representing the world outside: convert 
Wision into Vision.
b) do this without any information other than the numbers in front of 
you and without assuming you have any a-priori knowledge of the outside 
world.


That is the job the brain has. Resist the attempt to project your own 
knowledge into the circumstance. You will find the attempt futile.


Regards,

Colin








William Pearson wrote:

Hi Colin,

I'm not entirely sure that computers can implement consciousness. But
your arguments don't sway me one way or the other. A brief
reply follows.

2008/10/4 Colin Hales [EMAIL PROTECTED]:
  

Next empirical fact:
(v) When  you create a turing-COMP substrate the interface with space is
completely destroyed and replaced with the randomised machinations of the
matter of the computer manipulating a model of the distal world. All actual
relationships with the real distal external world are destroyed. In that
circumstance the COMP substrate is implementing the science of an encounter
with a model, not an encounter with the actual distal natural world.

No amount of computation can make up for that loss, because you are in a
circumstance of an intrinsically unknown distal natural world, (the novelty
of an act of scientific observation).
.



But humans don't encounter the world directly, else optical illusions
wouldn't exist, we would know exactly what was going on.

Take this site for example. http://www.michaelbach.de/ot/

It is impossible by physics to do vision perfectly without extra
information, but we do not do vision by any means perfectly, so I see
no need to posit an extra information source.

  Will








Re: [agi] COMP = false

2008-10-04 Thread John LaMuth
 Original Message - 
  From: Colin Hales 
  To: agi@v2.listbox.com 
  Sent: Saturday, October 04, 2008 3:22 PM
  Subject: Re: [agi] COMP = f

  ...

  You are exactly right: humans don't encounter the world directly (naive 
realism). Nor are we entirely operating from a cartoon visual fantasy(naive 
solipsism). 

  ^^

  It is closer to the latter

  How do you explain the vividness of DREAMS ...

  They have the same desynchronized EEG wave patterns as waking 
consciousness -- indistinguishable!

  Solution? -- We secrete our own awareness/consciousness -- Solipsism is 
Painless

  JLM

  http://www.forebrain.org 




Re: [agi] COMP = false

2008-10-04 Thread Ben Goertzel
The argument seems wrong to me intuitively, but I'm hard-put to argue
against it because the terms are so unclearly defined ... for instance I
don't really know what you mean by a visual scene ...

I can understand that to create a form of this argument worthy of being
carefully debated, would be a lot more work than writing this summary email
you've given.

So, I agree with your judgment not to try to extensively debate the argument
in its current sketchily presented form.

If you do choose to present it carefully at some point, I encourage you to
begin by carefully defining all the terms involved ... otherwise it's really
not possible to counter-argue in a useful way ...

thx
ben g

On Sat, Oct 4, 2008 at 12:31 AM, Colin Hales
[EMAIL PROTECTED]wrote:

 Hi Mike,
 I can give the highly abridged flow of the argument:

 1) It refutes COMP, where COMP = Turing machine-style abstract symbol
 manipulation. In particular the 'digital computer' as we know it.
 2) The refutation happens in one highly specific circumstance. In being
 false in that circumstance it is false as a general claim.
 3) The circumstances:  If COMP is true then it should be able to implement
 an artificial scientist with the following faculties:
   (a) scientific behaviour (goal-delivery of a 'law of nature', an
 abstraction BEHIND the appearances of the distal natural world, not merely
 the report of what is there),
   (b) scientific observation based on the visual scene,
   (c) scientific behaviour in an encounter with radical novelty. (This is
 what humans do)

 The argument's empirical knowledge is:
 1) The visual scene is visual phenomenal consciousness. A highly specified
 occipital lobe deliverable.
 2) In the context of a scientific act, scientific evidence is 'contents of
 phenomenal consciousness'. You can't do science without it. In the context
 of this scientific act, visual P-consciousness and scientific evidence are
 identities. P-consciousness is necessary but on its own is not sufficient.
 Extra behaviours are needed, but these are a secondary consideration here.

 NOTE: Do not confuse scientific observation  with the scientific
 measurement, which is a collection of causality located in the distal
 external natural world. (Scientific measurement is not the same thing as
 scientific evidence, in this context). The necessary feature of a visual
 scene is that it operate whilst faithfully inheriting the actual causality
 of the distal natural world. You cannot acquire a law of nature without this
 basic need being met.

 3) Basic physics says that it is impossible for a brain to create a visual
 scene using only the inputs acquired by the peripheral stimulus received at
 the retina. This is due to fundamentals of quantum degeneracy. Basically
 there are an infinite number of distal external worlds that can deliver the
 exact same photon impact. The transduction that occurs in the retinal
 rod/cones is entirely a result of protein isomerisation. All information
 about distal origins is irretrievably gone. An impacting photon could have
 come across the room or across the galaxy. There is no information about
 origins in the transduced data in the retina.

 That established, you are then faced with a paradox:

 (i) (3) says a visual scene is impossible.
 (ii) Yet the brain makes one.
 (iii) To make the scene some kind of access to distal spatial relations
 must be acquired as input data in addition to that from the retina.
 (iv) There are only 2 places that can come from...
   (a) via matter (which we already have - retinal impact at the
 boundary that is the agent periphery)
   (b) via space (at the boundary of the matter of the brain with space,
 the biggest boundary by far).
 So, the conclusion is that the brain MUST acquire the necessary data via
 the spatial boundary route. You don't have to know how. You just have no
 other choice. There is no third party in there to add the necessary data and
 the distal world is unknown. There is literally nowhere else for the data to
 come from. Matter and Space exhaust the list of options. (There is always
 magical intervention ... but I leave that to the space cadets.)

 That's probably the main novelty for the reader to encounter. But we
 are not done yet.

 Next empirical fact:
 (v) When  you create a turing-COMP substrate the interface with space is
 completely destroyed and replaced with the randomised machinations of the
 matter of the computer manipulating a model of the distal world. All actual
 relationships with the real distal external world are destroyed. In that
 circumstance the COMP substrate is implementing the science of an encounter
 with a model, not an encounter with the actual distal natural world.

 No amount of computation can make up for that loss, because you are in a
 circumstance of an intrinsically unknown distal natural world, (the novelty
 of an act of scientific observation).
 .
 = COMP is false.
 ==
 OK.  There are subtleties here.
 The 

Re: [agi] COMP = false

2008-10-04 Thread Matt Mahoney
--- On Sat, 10/4/08, Colin Hales [EMAIL PROTECTED] wrote:

Maybe I can just paint a mental picture of the job the brain has to do.
Imagine this:

You have no phenomenal consciousness at all. Your internal life is of a
dreamless  sleep.

Except ... for a new perceptual mode called Wision. 

Looming in front of you embedded in a roughly hemispherical blackness
is a gigantic array of numbers.

The numbers change.

Now: 

a) make a visual scene out of it representing the world outside:
convert Wision into Vision.

b) do this without any information other than the numbers in front of
you and without assuming you have any a-priori knowledge of the outside
world.

That is the job the brain has. Resist the attempt to project your own
knowledge into the circumstance. You will find the attempt futile. 

By visual scene, I assume you mean the original image impressed on your 
retina, expressed as an array of pixels. The problem you describe is to 
reconstruct this image given the highly filtered and compressed signals that 
make it through your visual perceptual system, like when an artist paints a 
scene from memory. Are you saying that this process requires a consciousness 
because it is otherwise not computable? If so, then I can describe a simple 
algorithm that proves you are wrong: try all combinations of pixels until you 
find one that looks the same.
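Matt's exhaustive-search argument can be made concrete with a toy sketch. The 2x2 binary "screen" and the looks_the_same predicate below are illustrative stand-ins (nothing in the thread specifies them); the point is only that enumeration terminates:

```python
from itertools import product

# Toy version of the brute-force argument: enumerate every possible
# image on a tiny 2x2 binary screen until one matches the target.
WIDTH, HEIGHT, COLORS = 2, 2, 2  # 2^(2*2) = 16 candidate images

def looks_the_same(candidate, target):
    # Hypothetical matching predicate; here, exact pixel equality.
    return candidate == target

def brute_force_reconstruct(target):
    # Try all COLORS^(WIDTH*HEIGHT) pixel assignments in order.
    for candidate in product(range(COLORS), repeat=WIDTH * HEIGHT):
        if looks_the_same(candidate, target):
            return candidate
    return None

print(brute_force_reconstruct((1, 0, 0, 1)))  # finds (1, 0, 0, 1)
```

At realistic resolutions the search space is astronomically large, which is exactly the objection raised in the replies below; the sketch only shows the procedure is well-defined.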

-- Matt Mahoney, [EMAIL PROTECTED]




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=114414975-3c8e69
Powered by Listbox: http://www.listbox.com


Re: [agi] COMP = false

2008-10-04 Thread Mike Tintner
Matt:The problem you describe is to reconstruct this image given the highly 
filtered and compressed signals that make it through your visual perceptual 
system, like when an artist paints a scene from memory. Are you saying that 
this process requires a consciousness because it is otherwise not 
computable? If so, then I can describe a simple algorithm that proves you 
are wrong: try all combinations of pixels until you find one that looks the 
same.


Matt,

Simple? Well, you're good at maths. Can we formalise what you're arguing? A 
computer screen, for argument's sake.  800 x 600, or whatever. Now what is 
the total number of (diverse) objects that can be captured on that screen, 
and how long would it take your algorithm to enumerate them?


(It's an interesting question, because my intuition says to me that there is 
an infinity of objects that can be depicted on any screen (or drawn on a 
page). Are you saying that there aren't? - that you can in effect predict 
new objects as yet unconceived,  new kinds of ipods/inventions/evolved 
species, say,  -at least in terms of their representations on a flat 
screen - with an algorithm? ) 







Re: [agi] COMP = false

2008-10-04 Thread Ben Goertzel
On Sat, Oct 4, 2008 at 8:37 PM, Mike Tintner [EMAIL PROTECTED]wrote:

 Matt:The problem you describe is to reconstruct this image given the highly
 filtered and compressed signals that make it through your visual perceptual
 system, like when an artist paints a scene from memory. Are you saying that
 this process requires a consciousness because it is otherwise not
 computable? If so, then I can describe a simple algorithm that proves you
 are wrong: try all combinations of pixels until you find one that looks the
 same.

 Matt,

 Simple? Well, you're good at maths. Can we formalise what you're arguing? A
 computer screen, for argument's sake.  800 x 600, or whatever. Now what is
 the total number of (diverse) objects that can be captured on that screen,
 and how long would it take your algorithm to enumerate them?

 (It's an interesting question, because my intuition says to me that there
 is an infinity of objects that can be depicted on any screen (or drawn on a
 page). Are you saying that there aren't? -



There is a finite number of possible screen-images, at least from the point
of view of the process sending digital signals to the screen.

If the monitor refreshes each pixel N times per second, then over an
interval of T seconds, if each pixel can show C colors, then there are

C^(N*T*800*600)

possible different scenes showable on the screen during that time period

A big number but finite!
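For a sense of scale, Ben's bound can be evaluated with a quick log computation. The specific values C = 2^24, N = 60, T = 1 are illustrative assumptions, not figures from his post:

```python
import math

# Ben's bound: C^(N*T*800*600) possible pixel-refresh sequences.
# Illustrative assumptions: 24-bit color, 60 Hz refresh, a 1-second window.
C = 2 ** 24          # colors per pixel
N, T = 60, 1         # refreshes per second, seconds
pixels = 800 * 600

exponent = N * T * pixels
# Count the decimal digits of C^exponent via logarithms; the number
# itself is far too large to construct directly.
digits = math.floor(exponent * math.log10(C)) + 1
print(digits)  # a finite count with roughly 2.08 x 10^8 decimal digits
```

Finite, as claimed, but already hundreds of millions of digits long for a one-second window.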

Drawing on a page is a different story, as it gets into physics questions,
but it seems rather likely there is a finite number of pictures on the page
that are distinguishable by a human eye.

So, whether or not an infinite number of objects exist in the universe, only
a finite number of distinctions can be drawn on a monitor (for certain), or
by an eye (almost surely)

ben g





Re: [agi] COMP = false

2008-10-04 Thread Mike Tintner
Ben,

Thanks for reply. I'm a bit lost though. How does this formula take into 
account the different pixel configurations of different objects? (I would have 
thought we can forget about the time of display and just concentrate on the 
configurations of points/colours, but no doubt I may be wrong).

Roughly how large a figure do you come up with, BTW?

I guess a related question is the old one - given a keyboard of letters, what 
are the total number of works possible with say 500,000 key presses, and how 
many 500,000-press attempts will it (or could it) take the proverbial monkey to 
type out, say, a 50,000 word play called Hamlet?

In either case, I would imagine, the numbers involved are too large to be 
practically manageable in, say, this universe, (which seems to be a common 
yardstick). Comments?   The maths here does seem important, because it seems to 
me to be the maths of creativity - and creative possibilities - in a given 
medium. A somewhat formalised maths, since creators usually find ways to 
transcend and change their medium - but useful nevertheless. Is such a maths 
being pursued?
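The keyboard half of the question can be bounded with the same kind of arithmetic. The 26-letter keyboard and 500,000 presses come from the question itself; treating each press as an independent letter choice is a simplifying assumption:

```python
import math

# Number of distinct 500,000-press sequences on a 26-letter keyboard:
# 26^500000. We count its decimal digits rather than build the number.
presses, keys = 500_000, 26
digits = math.floor(presses * math.log10(keys)) + 1
print(digits)  # about 707,000 decimal digits
```

For comparison, the number of atoms in the observable universe is usually estimated at around 10^80, i.e. an 81-digit number, so exhaustive enumeration here is indeed unmanageable "in this universe".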

  On Sat, Oct 4, 2008 at 8:37 PM, Mike Tintner [EMAIL PROTECTED] wrote:

Matt:The problem you describe is to reconstruct this image given the highly 
filtered and compressed signals that make it through your visual perceptual 
system, like when an artist paints a scene from memory. Are you saying that 
this process requires a consciousness because it is otherwise not computable? 
If so, then I can describe a simple algorithm that proves you are wrong: try 
all combinations of pixels until you find one that looks the same.

Matt,

Simple? Well, you're good at maths. Can we formalise what you're arguing? A 
computer screen, for argument's sake.  800 x 600, or whatever. Now what is the 
total number of (diverse) objects that can be captured on that screen, and how 
long would it take your algorithm to enumerate them?

(It's an interesting question, because my intuition says to me that there 
is an infinity of objects that can be depicted on any screen (or drawn on a 
page). Are you saying that there aren't? -


  There is a finite number of possible screen-images, at least from the point 
of view of the process sending digital signals to the screen.

  If the monitor refreshes each pixel N times per second, then over an interval 
of T seconds, if each pixel can show C colors, then there are

  C^(N*T*800*600)

  possible different scenes showable on the screen during that time period

  A big number but finite!

  Drawing on a page is a different story, as it gets into physics questions, 
but it seems rather likely there is a finite number of pictures on the page 
that are distinguishable by a human eye.  

  So, whether or not an infinite number of objects exist in the universe, only 
a finite number of distinctions can be drawn on a monitor (for certain), or by 
an eye (almost surely)


  ben g







Re: [agi] universal logical form for natural language

2008-10-04 Thread Dimitry Volfson

Ben Goertzel wrote:




No, the mainstream method of extracting knowledge from text (other
than manually) is to ignore word order. In artificial languages,
you have to parse a sentence before you can understand it. In
natural language, you have to understand the sentence before you
can parse it.



More exactly: in natural language, you have to understand the sentence 
before you can disambiguate amongst the roughly 1-50 
(syntactically-correct-but-not-necessarily-meaningful) parses that 
contemporary parsers provide.


-- Ben
I don't know. People don't fully understand most of what they read. They 
just understand enough for their own purposes.


And a lot of what they do understand is: the motives of the person 
communicating in communicating what they do. People would never 
communicate if they didn't have some (self-beneficial) purpose to do so. 
And this is a lens we always look through in interpreting information 
coming from some source. Managing the purposes others see in our own 
communication -- is also an important component in how humans communicate.


Also, human communication comes in bite-sized chunks. Because humans 
would not be able to understand an extremely long sentence that might 
(to someone who could understand it) communicate more accurately. We 
have to set up an idea -- frame it -- before we introduce new concepts 
or new scopes and views of the information. Thus the concept of a 
Main-Idea-Sentence in a paragraph.


- Dimitry





Re: [agi] COMP = false

2008-10-04 Thread Ben Goertzel
Ok, at a single point in time on a 600x400 screen, if one is using 24-bit
color (usually called true color) then the number of possible images is

2^(600x400x24)

which is, roughly, 10 with a couple million zeros after it ... way bigger
than a googol, way way smaller than a googolplex ;-)
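Ben's digit estimate checks out with a one-liner (a sketch; nothing beyond the 600x400x24 figure from his post is assumed):

```python
import math

# 2^(600*400*24) written in decimal: count its digits via log10.
bits = 600 * 400 * 24                      # 5,760,000 bits
digits = math.floor(bits * math.log10(2)) + 1
print(digits)  # about 1.73 million decimal digits
```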

This is a large number, but so what?

Of course, the human eye would not be able to tell the difference between
all these different images; that's a whole different story...

I don't see why these middle-school calculations are of interest?? ... this
has nothing to do with any of the philosophical issues under discussion,
does it?

ben

On Sat, Oct 4, 2008 at 9:22 PM, Mike Tintner [EMAIL PROTECTED]wrote:

  Ben,

 Thanks for reply. I'm a bit lost though. How does this formula take into
 account the different pixel configurations of different objects? (I would
 have thought we can forget about the time of display and just concentrate on
 the configurations of points/colours, but no doubt I may be wrong).

 Roughly how large a figure do you come up with, BTW?

 I guess a related question is the old one - given a keyboard of letters,
 what are the total number of works possible with say 500,000 key presses,
 and how many 500,000-press attempts will it (or could it) take the
 proverbial monkey to type out, say, a 50,000 word play called Hamlet?

 In either case, I would imagine, the numbers involved are too large to be
 practically manageable in, say, this universe, (which seems to be a common
 yardstick). Comments?   The maths here does seem important, because it seems
 to me to be the maths of creativity - and creative possibilities - in a
 given medium. A somewhat formalised maths, since creators usually find ways
 to transcend and change their medium - but useful nevertheless. Is such a
 maths being pursued?


 On Sat, Oct 4, 2008 at 8:37 PM, Mike Tintner [EMAIL PROTECTED]wrote:

 Matt:The problem you describe is to reconstruct this image given the
 highly filtered and compressed signals that make it through your visual
 perceptual system, like when an artist paints a scene from memory. Are you
 saying that this process requires a consciousness because it is otherwise
 not computable? If so, then I can describe a simple algorithm that proves
 you are wrong: try all combinations of pixels until you find one that looks
 the same.

 Matt,

 Simple? Well, you're good at maths. Can we formalise what you're arguing?
 A computer screen, for argument's sake.  800 x 600, or whatever. Now what is
 the total number of (diverse) objects that can be captured on that screen,
 and how long would it take your algorithm to enumerate them?

 (It's an interesting question, because my intuition says to me that there
 is an infinity of objects that can be depicted on any screen (or drawn on a
 page). Are you saying that there aren't? -



 There is a finite number of possible screen-images, at least from the point
 of view of the process sending digital signals to the screen.

 If the monitor refreshes each pixel N times per second, then over an
 interval of T seconds, if each pixel can show C colors, then there are

 C^(N*T*800*600)

 possible different scenes showable on the screen during that time
 period

 A big number but finite!

 Drawing on a page is a different story, as it gets into physics questions,
 but it seems rather likely there is a finite number of pictures on the page
 that are distinguishable by a human eye.

 So, whether or not an infinite number of objects exist in the universe,
 only a finite number of distinctions can be drawn on a monitor (for
 certain), or by an eye (almost surely)

 ben g




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome   - Dr Samuel Johnson





Re: AW: [agi] I Can't Be In Two Places At Once.

2008-10-04 Thread Brad Paulsen
Dr. Heger,

Point #3 is brilliantly stated.  I couldn't have expressed it better.  And
I know this because I've been trying to do so, in slightly broader terms,
for months on this list.  Insofar as providing an AGI with a human-biased
sense of space and time is required to create a human-like AGI (what I
prefer to call AG*H*I), I agree it is a mistake.

More generally, as long as AGI designers and developers insist on
simulating human intelligence, they will have to deal with the AI-complete
problem of natural language understanding.  Looking for new approaches to
this problem, many researchers (including prominent members of this list)
have turned to embodiment (or virtual embodiment) for help.  IMHO, this
is not a sound tactic because human-like embodiment is, itself, probably an
AI-complete problem.

Insofar as achieving human-like embodiment and human natural language
understanding is possible, it is also a very dangerous strategy.  The
process of understanding human natural language through human-like
embodiment will, of necessity, lead to the AGHI developing a sense of self.
 After all, that's how we humans got ours (except, of course, the concept
preceded the language for it).  And look how we turned out.

I realize that an AGHI will not turn on us simply because it understands
that we're not (like) it (i.e., just because it acquired a sense of self).
  But, it could.  Do we really want to take that chance?  Especially when
it's not necessary for human-beneficial AGI (AGI without the silent H)?

Cheers,
Brad


Dr. Matthias Heger wrote:
 1. We feel ourselves not exactly at a single point in space. Instead, we
 identify ourselves with our body, which consists of several parts that are
 already at different points in space. Your eye is not at the same place
 as your hand.
 I think this is a proof that a distributed AGI will not need  to have a
 complete different conscious state for a model of its position in space than
 we already have.
 
 2.But to a certain degree you are of course right that we have a map of our
 environment and we know our position (which is not a point because of 1) in
 this map. In the brain of a rat there are neurons which each represent a
 position of the environment. Researches could predict the position of the
 rat only by looking into the rat's brain.
 
 3. I think it is extremely important, that we give an AGI no bias about
 space and time as we seem to have. Our intuitive understanding of space and
 time is useful for our life on earth but it is completely wrong as we know
 from theory of relativity and quantum physics. 
 
 -Matthias Heger
 
 
 
 -----Original Message-----
 From: Mike Tintner [mailto:[EMAIL PROTECTED]
 Sent: Saturday, 4 October 2008 02:44
 To: agi@v2.listbox.com
 Subject: [agi] I Can't Be In Two Places At Once.
 
 The foundation of the human mind and system is that we can only be in one 
 place at once, and can only be directly, fully conscious of that place. Our 
 world picture,  which we and, I think, AI/AGI tend to take for granted, is 
 an extraordinary triumph over that limitation   - our ability to conceive of
 
 the earth and universe around us, and of societies around us, projecting 
 ourselves outward in space, and forward and backward in time. All animals 
 are similarly based in the here and now.
 
 But,if only in principle, networked computers [or robots] offer the 
 possibility for a conscious entity to be distributed and in several places 
 at once, seeing and interacting with the world simultaneously from many 
 POV's.
 
 Has anyone thought about how this would change the nature of identity and 
 intelligence? 
 
 
 
 
 

