RE: [agi] Consciousness vs. Intelligence

2008-06-08 Thread John G. Rose
 From: Dr. Matthias Heger [mailto:[EMAIL PROTECTED]
 
 The problem of consciousness is a hard problem not only because of unknown
 mechanisms in the brain, but because it is a problem of finding the
 DEFINITION of necessary conditions for consciousness.
 I think consciousness without intelligence is not possible. Intelligence
 without consciousness is possible. But I am not sure whether GENERAL
 intelligence without consciousness is possible. In any case, consciousness
 is even more of a white-box problem than intelligence.
 

For general intelligence, some components and sub-components of consciousness
need to be there and some don't. Some could be replaced with a human
operator, as in an augmentation-like system. Some components could be
designed drastically differently from their human consciousness counterparts
in order to achieve more desirable effects in one area or another. There may
also be consciousness components integrated into an AGI that humans don't
have, or that are almost non-detectable in humans. And I think that the
different consciousness components and sub-components could be allocated
resources more dynamically in the AGI software than in the human mind.

John





RE: [agi] Consciousness vs. Intelligence

2008-06-08 Thread John G. Rose
 From: Dr. Matthias Heger [mailto:[EMAIL PROTECTED]
 
 For general intelligence, some components and sub-components of
 consciousness need to be there and some don't. Some could be replaced with
 a human operator, as in an augmentation-like system. Some components could
 be designed drastically differently from their human consciousness
 counterparts in order to achieve more desirable effects in one area or
 another. There may also be consciousness components integrated into an AGI
 that humans don't have, or that are almost non-detectable in humans. And I
 think that the different consciousness components and sub-components could
 be allocated resources more dynamically in the AGI software than in the
 human mind.
 
 
 
 I can neither say 'yes' nor 'no'. It depends on how we DEFINE
 consciousness as a physical or algorithmic phenomenon. Until now we each
 have only an idea of consciousness from the intrinsic phenomena of our own
 mind. We cannot prove the existence of consciousness in any other
 individual because of the lack of a better definition.
 I do not believe that consciousness is located in a small sub-component.
 It seems to me that it is an emergent behavior of a special kind of huge
 network of many systems. But without any proper definition this can only
 be a philosophical thought.
 
 

Given that other humans have similar DNA, it is fair to assume that they are
conscious like us. That is not 100% proof, but it is probably good enough.
Sure, the whole universe may still be rendered for the purpose of one
conscious being, and in a way that is true, and potentially that is
something to take into account.

Consciousness has been given multiple definitions by many different people.
But even without an exact definition you can still extract properties and
behaviors from it, and from those, extrapolations can be made and the
beginnings of a model can be established.

Even if it is an emergent behavior of a huge network of many systems, that
doesn't preclude it from being described in a non-emergent way. And even if
it is only describable through emergent behavior, it still has some general,
commonly accepted components or properties.

John








Re: [agi] Consciousness vs. Intelligence

2008-06-01 Thread J Storrs Hall, PhD
On Saturday 31 May 2008 10:23:15 pm, Matt Mahoney wrote:

 Unfortunately AI will make CAPTCHAs useless against spammers.  We will
 need to figure out other methods.  I expect that when we have AI, most of
 the world's computing power is going to be directed at attacking other
 computers and defending against attacks.  It is no different than
 evolution.  A competitive environment makes faster rabbits and faster
 foxes.  Without hostility, why would we need such large brains?

In the biological world, big brains evolved to support reciprocal altruism, 
which requires recognizing individuals and knowing which ones owe you one 
and vice versa.

http://en.wikipedia.org/wiki/Reciprocal_altruism

Going back to Trivers' first studies: bats that practice R.A. have brains 
three times the size of ones that don't.

Josh




RE: [agi] Consciousness vs. Intelligence

2008-06-01 Thread John G. Rose
 From: Matt Mahoney [mailto:[EMAIL PROTECTED]
 Unfortunately AI will make CAPTCHAs useless against spammers.  We will
 need to figure out other methods.  I expect that when we have AI, most
 of the world's computing power is going to be directed at attacking
 other computers and defending against attacks.  It is no different than
 evolution.  A competitive environment makes faster rabbits and faster
 foxes.  Without hostility, why would we need such large brains?
 

AI has a long way to go to thwart CAPTCHAs altogether. There are math
CAPTCHAs (MAPTCHAs), 3-D CAPTCHAs, image-recognition CAPTCHAs, audio
CAPTCHAs, and I can think of some that are quite difficult for AI. Actually,
coming up with new CAPTCHAs would be a neat AI subproject, as would
thwarting existing ones.
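As a concrete flavor of the math variant, a minimal MAPTCHA could be
sketched in a few lines of Python (purely illustrative - the function names
are invented, and a real deployment would render the question as distorted
text or audio rather than plain text):

```python
import random

def make_maptcha():
    """Generate a simple word-problem challenge and its expected answer."""
    a, b = random.randint(2, 9), random.randint(2, 9)
    question = f"If you have {a} apples and pick {b} more, how many do you have?"
    return question, a + b

def check_maptcha(response: str, expected: int) -> bool:
    """Accept the response only if it parses to the expected number."""
    try:
        return int(response.strip()) == expected
    except ValueError:
        return False

question, expected = make_maptcha()
print(question)                                # challenge shown to the user
print(check_maptcha(str(expected), expected))  # a correct answer passes
```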


  Does that have anything to do with consciousness?
  What's the test for consciousness?
 
 I think the way you are using the term, it is the Turing test.
 

Are you sure? Isn't that supposed to just differentiate between computer and
human? Not between unconscious and conscious? Unless you think consciousness
is just belief...

John





RE: [agi] Consciousness vs. Intelligence

2008-06-01 Thread John G. Rose
 From: Mike Tintner [mailto:[EMAIL PROTECTED]
 
 You are - if I've understood you - talking about the machinery and
 programming that produce and help to process the movie of consciousness.
 
 I'm not in any way denying all that or its complexity. But the first thing
 is to define and model consciousness, before you work out that machinery
 etc.
 
 It may sound simple to you, because you're naturally enamoured of the
 programming/machinery part, and trying to work out sophisticated
 programming etc. may sound much smarter and more exciting.
 
 But actually starting with that simple model is much more important. If
 you don't know, or only half know, or are radically confused about, what
 you're trying to explain, all the technical ideas you may come up with
 will be worthless.
 

I understand what you are saying. But for me, I just need to get a mental
visual on it. Like, say that you are an automotive engine designer or a
class A engine mechanic. You have that engine running inside your head in
that mental CAD system. You can change parameters like octane and you know
the effects on running temperature. You can tweak the thing in your head,
increase the piston diameter, and you know what's going to happen. For
consciousness I need to get a visual of the model and lock onto it; then I
can get the math and the code. Easy for an insectoid-level consciousness,
but when adding stuff I can't fit the whole model in my head, so it gets
difficult to change variables and test.

Kinda like, I'm trying ta think but nothin' happens!!

HEH

John







RE: [agi] Consciousness vs. Intelligence

2008-06-01 Thread John G. Rose
 From: Matt Mahoney [mailto:[EMAIL PROTECTED]
 --- On Sun, 6/1/08, John G. Rose [EMAIL PROTECTED] wrote:
 
  AI has a long way to go to thwart CAPTCHAs altogether. There are math
  CAPTCHAs (MAPTCHAs), 3-D CAPTCHAs, image-recognition CAPTCHAs, audio
  CAPTCHAs, and I can think of some that are quite difficult for AI.
  Actually, coming up with new CAPTCHAs would be a neat AI subproject, as
  would thwarting existing ones.
 
 I think security is going to be a driving force behind AI development.
 You can see which way this is heading. It is a scruffy approach, no
 grand theories, just a series of hacks and incremental improvements on
 both sides to get the job done. Ultimately all CAPTCHAs will fail and we
 will need AI to detect malicious activity downstream.
 
Does that have anything to do with consciousness?
What's the test for consciousness?
  
   I think the way you are using the term, it is the Turing test.
  
 
  Are you sure? Isn't that supposed to just differentiate
  between computer and human? Not between unconscious and
  conscious? Unless you think consciousness is just belief...
 
 Yes. A CAPTCHA is a cheap Turing test. Whether a machine thinks or is
 conscious is just an irrelevant distraction that Turing wanted to avoid.
 


OK, how about this: a CAPTCHA that combines audio and visual illusion to
evoke a realtime reaction only in a conscious physical human. Can
audio-visual illusion be used as a test for consciousness? Could it be used
to evoke a specific conscious-only reaction in a human mind? Hmm...

John








RE: [agi] Consciousness vs. Intelligence

2008-06-01 Thread Matt Mahoney
--- On Sun, 6/1/08, John G. Rose [EMAIL PROTECTED] wrote:
 OK, how about this: a CAPTCHA that combines audio and visual illusion to
 evoke a realtime reaction only in a conscious physical human. Can
 audio-visual illusion be used as a test for consciousness? Could it be
 used to evoke a specific conscious-only reaction in a human mind? Hmm...

For example, a CAPTCHA plays an audio clip and you have to match it to one of 
several images.  Or did you have something else in mind?

In any case, solving a CAPTCHA means giving the right output for some input. 
There is no reason in principle that a machine could not solve it.

What do you mean by consciousness?  Do you agree that a human brain simulated 
by a computer at the neuron level would be functionally equivalent and 
indistinguishable from human?  Or is there a mysterious force like 
consciousness that causes the machine to give a different output for some 
input?

-- Matt Mahoney, [EMAIL PROTECTED]





RE: [agi] Consciousness vs. Intelligence

2008-06-01 Thread John G. Rose
 From: Matt Mahoney [mailto:[EMAIL PROTECTED]
 --- On Sun, 6/1/08, John G. Rose [EMAIL PROTECTED] wrote:
  OK, how about this: a CAPTCHA that combines audio and visual illusion to
  evoke a realtime reaction only in a conscious physical human. Can
  audio-visual illusion be used as a test for consciousness? Could it be
  used to evoke a specific conscious-only reaction in a human mind? Hmm...
 
 For example, a CAPTCHA plays an audio clip and you have to match it to
 one of several images.  Or did you have something else in mind?
 

Ah, use common visual illusion techniques and build a software library;
there may already be some. Then an audio illusion library. Then find a set
of audio-visual combinations that induce evocation of, say, strings, like
asking the user to recognize this string of alphanumeric characters. That's
all one step. The next step would be to find, using human volunteers,
evocation results unique to human conscious experience. Example: this
illusory sensory stimulation causes a human to sense this uniquely
describable qualia, at some percentage success rate, like 90%. But the
ultimate would be to be able to ring someone's bell at a 100% unique hit
rate, based on DNA or something else, or just their own signature
uniqueness...
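The two steps described above can be written down as a pipeline, purely for
illustration - the illusion "libraries" below are stand-in lists, and
score_on_volunteers() is a placeholder for the actual human study:

```python
import itertools, random

visual_illusions = ["kanizsa_triangle", "motion_aftereffect", "color_phi"]
audio_illusions = ["shepard_tone", "mcgurk_clip", "phantom_words"]

def score_on_volunteers(combo) -> float:
    """Placeholder for step two: how reliably this audio-visual combination
    evokes the intended percept in human volunteers."""
    return random.random()  # stand-in for an empirically measured hit rate

# Step one: enumerate audio-visual combinations.
candidates = itertools.product(visual_illusions, audio_illusions)

# Step two: keep combinations that hit at the rate mentioned above (90%).
challenges = [c for c in candidates if score_on_volunteers(c) >= 0.9]
print(challenges)
```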

 In any case, solving a CAPTCHA means giving the right output for some
 input. There is no reason in principle that a machine could not solve
 it.

The machine would have to replicate that specific human biochemical
computational ability in order to achieve that level of pattern-recognition
fidelity. If you are able to dupe that, then you are human or better.

 
 What do you mean by consciousness?  Do you agree that a human brain
 simulated by a computer at the neuron level would be functionally
 equivalent and indistinguishable from human?  Or is there a mysterious
 force like consciousness that causes the machine to give a different
 output for some input?
 

The machine - the human brain - will give a machine-specific output, i.e.,
an illusion. Using audio-visual illusion you may be able to create cognitive
illusions which are unique to a human brain.

John






RE: [agi] Consciousness vs. Intelligence

2008-05-31 Thread John G. Rose
 From: Mike Tintner [mailto:[EMAIL PROTECTED]
 
 You guys are seriously irritating me.
 
 You are talking such rubbish. But it's collective rubbish - the
 collective *non-sense* of AI. And it occurs partly because our culture
 doesn't offer a simple definition of consciousness. So let me have a
 crack at one.
 
 First off, let's remove consciousness-as-sentience. That's important,
 but it's secondary.
 
 The real issue of whether an AGI - a computer - should have
 consciousness is, I suggest: should it have a world-movie?
 
 IOW should it run a continuous sensory movie of the world around it?
 
 That's what every living creature from single cells upwards runs - a
 continuous sensory movie of the world around it (although it took time
 to get to visual movies).
 
 That movie is clearly the central business of consciousness. And it is
 also clearly what evolution thought the absolute foundation of
 intelligence. Amazingly, it didn't listen to you guys, smart as you are
 - it didn't start with logic or mathematics or language, or anything
 that AI considers to be a sine qua non.
 
 That sensory movie is the absolute foundation of human intelligence
 too, the thing that never ever stops. Even you guys sometimes stop
 thinking in language, or maths, but your movie of the world around you
 never stops. And when you sleep, your mind keeps running movies of
 imagined worlds around you. Even then language and maths are at best
 only occasional participants in the movie.
 
 If you could stop thinking about just your computers, and start
 thinking - as you absolutely must here - about robots as well, then it
 is, I suggest, obvious that the first thing a robot needs is a
 world-movie.
 
 How can you survive in the world if you can't see the world, can't see
 where you need to go, or what's coming at you (or which keys to push on
 your computer)? Even your fictional superAGI, if it were to be
 independent, would have to run a movie of the world around it, to
 protect itself from all the dangers of the world, like human
 programmers bent on harming it, and to ensure its supply of energy and
 other necessities.
 
 How too can you know about the world if you've never seen it,
 on-the-spot, firsthand, and in person through your free-roaming
 world-movie? ("Sure I can. Wikipedia tells me everything I need to know
 about the world and life." Right.)
 
 If you continue to think robotically, you also won't have any need to
 include high-falutin' forms of self-consciousness in your definitions
 of consciousness.
 
 Because that movie will obviously have to be an I-world-movie. Any
 robot or agent moving through the world must have a continuous sense of
 self - its integrated body, brain and sensors, rather than some
 homunculus - in relation to the world around it, and therefore a sense
 of itself watching the movie. There can be no consciousness WITHOUT
 self-consciousness - no world-movie without an I/eye/camera. You have
 to estimate continuously where things are in relation to yourself in
 order to shift your POV, or move this way or that, towards or away from
 things, as required. So no high-falutin' "yes, but who am I really - I
 mean deep down" kind of self-consciousness is required, just the most
 basic kind, that even schizophrenics have.
 
 It should be obvious that consciousness is your world-movie - it's what
 you're looking at right now, the show that never stops. How could you
 survive in the real world without that world-movie?
 
 But when you're incredibly smart, and rational, and logical, like
 AI-ers, and are deeply prejudiced against anything to do with movies or
 images or imagination, let alone bodies, you can't see the obvious -
 the couldn't-possibly-be-more-obvious.
 
 And you are prepared to settle for a totally sense-less,
 deaf-dumb-and-blind, brain-in-a-vat conception of intelligence.
 
 Then you end up resorting to the most desperately contorted arguments
 to justify your senselessness - I can see this one coming - "ah but my
 cousin has been a total vegetable in a coma for the last twenty years,
 and he's conscious."
 

Mike,

The reason people are thinking about all this stuff in terms of maths is
that it is not all just fluffy philosophizing; you have to have at least
minimalistic math models in order to build software. So when you say
iTheatre or iMovie, I'm thinking bits per second, compression, color depth,
Fourier transforms, object recognition probabilities... sorry man, that's
how it is.

Just saying that there is a movie projecting against the inside of your
skull ain't gonna cut the mustard. "Movie" is too broad; you have to define
it more...

John







RE: [agi] Consciousness vs. Intelligence

2008-05-31 Thread John G. Rose
 From: Matt Mahoney [mailto:[EMAIL PROTECTED]
 
 What many people call consciousness is qualia, that which distinguishes
 you from a philosophical zombie, http://en.wikipedia.org/wiki/P-zombie
 
 There is no test for consciousness in this sense, but humans universally
 believe that they are conscious, and this belief is testable.  Just ask
 someone.  Do you really feel pain, or do you just behave as if you feel
 it?
 
 The belief in experiencing qualia is what I call recursive episodic
 memory.  Episodic memory is the ability to recall a time sequence of
 events in the correct order.  These events could include earlier acts of
 recall.  For example, earlier today I recalled how yesterday a tune was
 playing in my head that I heard the day before (and so on).
 
 You probably do not remember any events that happened before you were 3
 years old.  You were clearly learning then, but it was not in episodic
 memory.  A person without a hippocampus lacks episodic memory.  He could
 learn new skills but wouldn't remember the lessons.  Episodic memory has
 been demonstrated in birds, but we do not know if it is recursive.
 
 I don't know if recursive episodic memory is necessary for intelligence.
 When I need to come up with an algorithm when writing software, it is
 useful to go through the steps in my head and then be able to recall my
 thought process.  It is also useful for databases to log read-only
 transactions.  It is useful for computers to copy recently read data to
 cache.
 
 However, recursive episodic memory could also be an artifact of the
 brain's memory management system.  Long term memory is written at a
 constant rate (about 2 bits per second, according to Landauer).  During
 quiet times, it has to write something.
 

People believe they are conscious. Why? Because they are. There is
camaraderie between conscious beings, especially ones of similar
consciousness, like the same species. I believe that the guy walking down
the street looking at me thinking about mugging me (say it's NYC) is
conscious. Why? Because he believes that I am conscious and I might have a
gun for defense. It's self-reinforcing. We analyze each other's conscious
thoughts. With more than two agents you get more effects and amplifications.

Is there more than just a belief that we are conscious? Sure, some rare
individuals can block pain. But when they do so they are actually blocking
the signals somehow or preventing their registration. There is a real pain
that is blocked. It's real.

Consciousness is a system related to structure, information flow, etc., and
there are different types and strengths. It's more than just a belief... and
belief may be a product of consciousness.
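As a side note, the recursive episodic memory Matt describes above is easy
to render as a toy data structure: a time-ordered event log in which each
act of recall is itself logged as an event. A minimal sketch in Python (the
class and method names are invented for illustration):

```python
import time

class EpisodicMemory:
    """Toy episodic memory: a time-ordered log of events."""
    def __init__(self):
        self.events = []  # (timestamp, description), oldest first

    def record(self, description: str):
        self.events.append((time.time(), description))

    def recall(self, n: int = 5):
        """Return the last n events, and log the act of recall itself,
        which is what makes the memory recursive."""
        episode = self.events[-n:]
        self.record(f"recalled {len(episode)} earlier events")
        return episode

m = EpisodicMemory()
m.record("a tune was playing in my head")  # the original experience
m.recall()   # remembering it...
m.recall()   # ...and remembering the remembering, as in Matt's example
```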

John





Re: [agi] Consciousness vs. Intelligence

2008-05-31 Thread Mike Tintner
John: The reason people are thinking about all this stuff in terms of maths
is that it is not all just fluffy philosophizing; you have to have at least
minimalistic math models in order to build software. So when you say
iTheatre or iMovie, I'm thinking bits per second, compression, color depth,
Fourier transforms, object recognition probabilities... sorry man, that's
how it is.

Just saying that there is a movie projecting against the inside of your
skull ain't gonna cut the mustard. "Movie" is too broad; you have to define
it more...

John,

Thanks for the response. I apologise to you and others for going over the
top, but I was deeply frustrated - and right to be frustrated - with the
scientific/philosophical world (and not just AI) regarding their treatment
of consciousness.


No, I believe I'm right here. Maths is only quantification - the question
is: what are you quantifying? Programs are only recipes to construct
something or a sequence of behaviour. The question again is: what are you
constructing?


You have to START by providing a model of consciousness - of what it 
involves. It is grossly unscientific not to do so. And that movie is the 
sine qua non starting point for a model.


Certainly intelligence has to be applied to the movie - to understand what 
is being reflected in the movie - the objects and world around you.


And by all means quantify and program away - but first agree about what you 
are quantifying. Otherwise it's all basically hot air. And you guys - along 
with all other serious thinkers discussing this area - have NO AGREEMENT 
about what you are discussing, or whether any of you are talking about the 
same thing. That, if you think about it, is ridiculous.


Damasio, who is one of the best thinkers here, talks of consciousness as
the "movie in the mind" - I believe that "world movie" is a step forward
because it focusses on what the movie shows and is for.


Some perspective here: words are absolutely wrong and misleading as a SOLE
medium to discuss consciousness - they fragment whatever you are talking
about. And consciousness is indeed a continuous movie (for want of a still
richer model) - and MUST be talked about with the aid of movies, as I tried
to do.


The reason people resist this model is that they are only comfortable
talking in words. They are uncomfortable and ill-versed in thinking visually
and sensorily. Well, tough. If you are serious about consciousness, there is
no alternative. You cannot discuss actual movies in cinemas seriously with
just words. Nor can you discuss the movie-that-is-consciousness seriously
with just words either.


You guys, I am consistently arguing, have to learn respect for the brain. If
the brain does things a certain way, then that is probably the "ideal" way
to do it - in the technical, psychological sense - i.e. not the perfect way,
not something that can't be improved, but a more or less inevitable and
essential way (in a very broad sense) to tackle the given problems.


And the way the brain chooses throughout evolution to tackle the problems of 
survival is to run a movie. Your brain does not allow you, for example, to 
jump straight to imbibing logic and mathematics - as your computers can - it 
forces you to run a movie of the books you're reading, or computer screen 
you're looking at, and all those logical and mathematical figures have to be 
processed as IMAGES. Your brain insists that you SEE or otherwise sense what 
you're talking about.


P.S. Qualia are a massive distraction here. People are ironically making the
same mistake here that novice novelists make. When they start writing a
story, they are obsessed with the FEELINGS of the events they describe (God,
that love affair was so painful, you see). But actually you can't think
straight about any events in your life if you don't describe what you and
other people are DOING... whether by actions, or thinking, or speech. Once
you concentrate on those, the feelings automatically fall into place.


The same is true with consciousness.  Model first what consciousness does. 
It sees etc - runs a movie - of the world.


P.P.S. As I pointed out, too, without a model of consciousness-as-movie the 
SELF wanders around, homeless, in any verbal discussion of consciousness.


With the movie model, the self automatically has a place - it's the 
brain-body watching the movie, continually directing the movie, turning the 
camera this way and that.


And then the contents of consciousness fall into place too - because every 
sight - every shot you see - including every photograph you look at - has 
a Point of View - inevitably implies a self watching at a distance. IOW 
every shot is a close-up, long distance, at an upward/downward angle 
from an implied viewer.


And all this is true, of course, for all animals.

So, no, there is no alternative to the movie model (only better or superior
modified versions of it). What alternative are you or anyone else offering?






Re: [agi] Consciousness vs. Intelligence

2008-05-31 Thread Matt Mahoney
--- On Sat, 5/31/08, Ben Goertzel [EMAIL PROTECTED] wrote:

 But in future, there could be impostor agents that act like
 they have humanlike subjective experience but don't ... and we
 could uncover them by analyzing their internals...

What internal properties of a Turing machine distinguish one that has 
subjective experiences from an equivalent machine (implementing the same 
function) that only pretends to have subjective experience?

-- Matt Mahoney, [EMAIL PROTECTED]






RE: [agi] Consciousness vs. Intelligence

2008-05-31 Thread John G. Rose
 From: Ben Goertzel [mailto:[EMAIL PROTECTED]
 
 If by conscious you mean having a humanlike subjective experience,
 I suppose that in future we will infer this about intelligent agents
 via a combination of observation of their behavior, and inspection of
 their internal construction and dynamics.
 

An N-like subjective experience, where N is human, animal, bug, space
alien, god... I don't know if there needs to be an "I", since you could
have a distributed, decentralized "I" or other forms.

 But in future, there could be impostor agents that act like they have
 humanlike subjective experience but don't ... and we could uncover
 them by analyzing their internals...
 

If the impostors are good enough then they would be the same from a
functional perspective. And if they were the same, eventually they would
improve their various attributes until we appeared as zombies to them and
they as godlike to us, even though they were just imitating our
consciousness.

 This is under the assumption that subjective experience of an agent is
 correlated with (though not identical with) the patterns in the
 physical system serving as the substrate of that agent ... and that
 external behaviors only constitute a subset of these patterns...
 

Yes, unless there is that complexity layer disconnect...


John






RE: [agi] Consciousness vs. Intelligence

2008-05-31 Thread John G. Rose
 From: Mike Tintner [mailto:[EMAIL PROTECTED]
 That's correct. The model of consciousness should be the self [brain-body]
 watching and physically interacting with the movie [which is in a sense an
 open movie - rather than on a closed screen - projected all over the world
 outside, and on the inside of the body]. The self is an integrated
 brain-body unit, acting and responding with the whole body.
 
 But you missed out the all-important part which I believe you're all
 skipping over. What is your or anyone else's model of consciousness? Which
 model are you using? Or do you know anyone else using one? Or do you not
 have a model?
 
 You've been talking about consciousness - *what* have you been talking
 about? Honestly?
 

Mike,

Just because you have movies and theatres and are experiencing a simuworld
in your mind, rolling around in the sand feeling all warm and fuzzy, doesn't
make it work. There are a few quantitative systems relationships that need
to be strictly defined in order to have a model that isn't just a bunch of
ideas slapped together that sound good to a philosophy student. The system
has to come together in such a way that it functions and operates like a
machine, or a machine derivative, so that you can actually build it within a
lifetime. There are consciousness patterns. And there are consciousness
inkblots.

John





Re: [agi] Consciousness vs. Intelligence

2008-05-31 Thread Matt Mahoney
--- On Sat, 5/31/08, Ben Goertzel [EMAIL PROTECTED] wrote:

I wrote:
  What internal properties of a Turing machine
  distinguish one that has subjective experiences from an
  equivalent machine (implementing the same function) that
  only pretends to have subjective experience?
 
 
 You're asking a different question.
 
 What I said was that internal properties could distinguish
 
 a) a machine having HUMANLIKE subjective experiences
 
 from
 
 b) a machine just claiming to have HUMANLIKE subjective
 experiences, but not really having them

The reason I ask is that humans and human-like intelligence occupy a tiny
region in the huge space of possible intelligences.  If only humanlike
subjective experience is important, then the definition is easy: subjective
experience is something humans have, and nothing else.  End of argument.

I am looking for a more general principle.  On SL4 I proposed that an agent A
receiving sensory input x at time t has subjective experience

s(x|A) = K(A(t+)|A(t-))

where K is Kolmogorov complexity, A(t-) is the state of A immediately before
input x, and A(t+) is the state immediately afterwards.  In other words, it
is the length (in bits) of the shortest program that takes as input a
description of A prior to input x and outputs a description of A after input
x.  It measures how much A remembers about x, independent of K(x).
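For concreteness, here is one way this quantity could be approximated.
Kolmogorov complexity is uncomputable, so this sketch (editorial, not from
the thread) substitutes an off-the-shelf compressor as a crude upper bound
and treats the agent's state as a plain byte string:

```python
import zlib

def approx_conditional_k(before: bytes, after: bytes) -> int:
    """Crudely approximate K(after|before) in bits as the extra cost of
    compressing the joint string over compressing the prior state alone."""
    k_joint = len(zlib.compress(before + after)) * 8
    k_prior = len(zlib.compress(before)) * 8
    return max(0, k_joint - k_prior)

state_before = b"agent memory: ..." * 100          # A(t-), a stand-in
x = b"a novel sensory input"
state_after = state_before + x                     # A(t+): x is remembered

# Approximates s(x|A) = K(A(t+)|A(t-)): bits A retained about x.
print(approx_conditional_k(state_before, state_after))
```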

In your book you mentioned the intensity of a pattern and gave a definition
that included a lossy compression term (i.e. the part of x that A ignored).
Note that in general,

s(x|A) <= K(x|A(t-))

where the difference is the number of bits that A ignored.

This definition makes no distinction between having subjective experience and 
pretending to have it.  It also makes no distinction between humanlike 
subjective experience and any other kind.

By this definition, your computer can already have 10^12 bits of subjective 
experience, far more than the 10^9 bits of human long term memory estimated by 
Landauer.
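As an order-of-magnitude aside (an editorial check using only figures
already quoted in this thread): the ~2 bits/s write rate attributed to
Landauer earlier, accumulated over a lifetime, lands within an order of
magnitude of the 10^9-bit figure here; forgetting pulls the raw total down.

```python
# Lifetime long-term memory at Landauer's ~2 bits/s, before forgetting.
SECONDS_PER_YEAR = 3.15e7
raw_bits = 2 * 70 * SECONDS_PER_YEAR   # ~4.4e9 bits over 70 years
print(f"{raw_bits:.1e}")               # same order as the 1e9 estimate
```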

I do not mean to imply any ethical considerations by this.  It is easy to 
confuse conscious entities (those having subjective experience) with entities 
that have rights or require compassion.  The human ethical model has no such 
requirement.  Ethics is an evolved function that selects for group fitness.  
You are compassionate to other humans because it increases the odds of passing 
on your genes.  If all conscious entities were worthy of compassion, we would 
not have wars or eat meat.  This leads us back to our original definition...


-- Matt Mahoney, [EMAIL PROTECTED]







RE: [agi] Consciousness vs. Intelligence

2008-05-31 Thread Matt Mahoney
--- On Sat, 5/31/08, John G. Rose [EMAIL PROTECTED] wrote:

  From: Matt Mahoney [mailto:[EMAIL PROTECTED]

  I don't believe you are conscious.  I believe you
  are a zombie.  Prove me wrong.
 
 I am a zombie. Prove to me that I am not. Otherwise I will
 accuse you of being conscious.

Exactly my point.

-- Matt Mahoney, [EMAIL PROTECTED]





RE: [agi] Consciousness vs. Intelligence

2008-05-31 Thread John G. Rose
 From: Matt Mahoney [mailto:[EMAIL PROTECTED]
 --- On Sat, 5/31/08, John G. Rose [EMAIL PROTECTED] wrote:
 
   From: Matt Mahoney [mailto:[EMAIL PROTECTED]
 
   I don't believe you are conscious.  I believe you
   are a zombie.  Prove me wrong.
 
  I am a zombie. Prove to me that I am not. Otherwise I will
  accuse you of being conscious.
 
 Exactly my point.
 

Just because you communicate through the limited-bandwidth medium of text
doesn't prove anything. You could make a smart zombie chatterbot and make it
irritating enough - very terse, and just fleeting enough - that you never
figure it out. So what?

A problem, though, is that eventually even a CAPTCHA won't work to filter
these things. CAPTCHAs will evolve past character recognition to image
recognition; it'll be interesting. What do you see in this image, a pony or
a donkey? The bots will get smarter. And CAPTCHAs will ask even more
humanlike questions.

Does that have anything to do with consciousness? What's the test for
consciousness?

John





RE: [agi] Consciousness vs. Intelligence

2008-05-31 Thread John G. Rose
 From: Mike Tintner [mailto:[EMAIL PROTECTED]
 
 you utterly refused to answer my question re: what is your model? It's not
 a hard question to start answering - i.e. either you do have some kind of
 model or you don't. You simply avoided it. Again.


I have some models that I feel confident would work, yet they are basically
insectoid-level consciousness. When I get to more sophisticated models, the
confidence that I have in them is not enough, the systems are difficult to
test just by thinking about them, and they are incomplete. So... still
working on that. But I do have higher-level intelligence models that I am
confident in.


 And a major point I'm making is that everyone is doing that. Everyone is
 picking some very limited aspect of consciousness that is important to
 them - experience, qualia, self-consciousness. And no one has or is
 offering a model of the whole.
 

I haven't read up enough on other models to concur on this.


 But we can and must produce a model and point to what we're talking about.
 
 Here is someone who is conscious:
 
 http://www.bized.co.uk/images/man_remote.jpg
 
 We can observe consciousness from the outside with such a picture (or
 better still a clip) - and point to those parts of him that are conscious,
 and how and what parts of him, like nerves, produce that consciousness.
 
 And we can model his consciousness from the inside:
 
 http://electrojusa.iespana.es/images/philips_25pt_7304_television__47429.jpg
 http://www.engadgetmobile.com/media/2007/01/sch-u620-hands-on-2.jpg
 
 what he's seeing and hearing etc.
 
 A properly defined model - including my movie model - should include all
 those things.


When you describe this you have to be careful how much computation your mind
is doing and taking for granted. You make many assumptions just by looking
at the pic and saying these are signs that this man is conscious. And saying
that a handheld TV is some sort of model - yeah, that's making massive
assumptions and shortcuts to the point of assuming that 99% of your model
already exists where it doesn't; you're just pointing at data feeds and
neglecting numerous other details which are the most important.

John








Re: [agi] Consciousness vs. Intelligence

2008-05-31 Thread Mike Tintner
John: When you describe this you have to be careful how much computation
your mind is doing and taking for granted. You make many assumptions just by
looking at the pic and saying these are signs that this man is conscious.
And saying that a handheld TV is some sort of model - yeah, that's making
massive assumptions and shortcuts to the point of assuming that 99% of your
model already exists where it doesn't; you're just pointing at data feeds
and neglecting numerous other details which are the most important.

John,

You are - if I've understood you - talking about the machinery and
programming that produce and help to process the movie of consciousness.


I'm not in any way denying all that or its complexity. But the first thing 
is to define and model consciousness, before you work out that machinery 
etc.


It may sound simple to you, because you're naturally enamoured of the 
programming/machinery part, and trying to work out sophisticated programming 
etc may sound much smarter and more exciting.


But actually starting with that simple model is much more important. If 
you don't know, or only half know, or are radically confused about, what 
you're trying to explain, all the technical ideas you may come up with will 
be worthless.


And that's the same mistake people are making with AGI generally - no one
has a model of what general intelligence involves, or of the kind of
problems it must solve - what it actually DOES - and everyone has left that
till later, and is instead busy with all the technical programming that they
find exciting - with the "how it works" side - without knowing whether
anything they're doing is really necessary or relevant.







RE: [agi] Consciousness vs. Intelligence

2008-05-31 Thread Matt Mahoney
--- On Sat, 5/31/08, John G. Rose [EMAIL PROTECTED] wrote:

  From: Matt Mahoney [mailto:[EMAIL PROTECTED]
  --- On Sat, 5/31/08, John G. Rose [EMAIL PROTECTED] wrote:
  
    From: Matt Mahoney [mailto:[EMAIL PROTECTED]
  
I don't believe you are conscious.  I believe you
are a zombie.  Prove me wrong.
  
   I am a zombie. Prove to me that I am not. Otherwise I will
   accuse you of being conscious.
  
  Exactly my point.
  
 
 Just because you communicate through the limited-bandwidth medium of text
 doesn't prove anything. You could make a smart zombie chatterbot and make
 it irritating enough - very terse, and just fleeting enough - that you
 never figure it out. So what?

My point is that you can't tell if a person is conscious (has experience) or a 
zombie (learns and has memory, but no qualia).  I think you are talking about 
something else.  By zombie, I mean 
http://en.wikipedia.org/wiki/Philosophical_zombie

 A problem, though, is that eventually even a CAPTCHA won't work to filter
 these things. CAPTCHAs will evolve past character recognition to image
 recognition; it'll be interesting. What do you see in this image, a pony
 or a donkey? The bots will get smarter. And CAPTCHAs will ask even more
 humanlike questions.

Unfortunately AI will make CAPTCHAs useless against spammers.  We will need to 
figure out other methods.  I expect that when we have AI, most of the world's 
computing power is going to be directed at attacking other computers and 
defending against attacks.  It is no different than evolution.  A competitive 
environment makes faster rabbits and faster foxes.  Without hostility, why 
would we need such large brains?

 Does that have anything to do with consciousness?
 What's the test for consciousness?

I think the way you are using the term, it is the Turing test.

-- Matt Mahoney, [EMAIL PROTECTED]





RE: [agi] Consciousness vs. Intelligence

2008-05-29 Thread John G. Rose
 From: Matt Mahoney [mailto:[EMAIL PROTECTED]
 --- John G. Rose [EMAIL PROTECTED] wrote:
 
  Consciousness with minimal intelligence may be easier to build than
  general intelligence. General intelligence is the one that takes the
  resources. A general consciousness algorithm, one that creates a
  consciousness in any environment, may be simpler than a general
  intelligence algorithm that acquires intelligence in any environment.
  The two can go hand in hand but one can be minimized against the other.
  But I don't understand the relationship between consciousness and
  intelligence. I want to say that they are like disjoint vectors but that
  doesn't seem right...
 
 You need to define your terms.  What properties of an algorithm make it
 conscious?  What properties make it intelligent?  To some people, the two
 terms are equivalent.  To others, consciousness does not exist.
 
 

How can the two terms be equivalent? Some may think that they are
inseparable, or that one cannot exist without the other; I can understand
that perspective. But there is a quantitative relationship between the two.
When you get into strict definitions, people get alienated...

John





Re: [agi] Consciousness vs. Intelligence

2008-05-29 Thread Vladimir Nesov
On Thu, May 29, 2008 at 6:41 PM, John G. Rose [EMAIL PROTECTED] wrote:

 How can the two terms be equivalent? Some may think that they are
 inseparable, or that one cannot exist without the other; I can understand
 that perspective. But there is a quantitative relationship between the two.
 When you get into strict definitions, people get alienated...


For me, the working meaning of consciousness is reflection, or a process
of memory-formation about the processes going on in the mind, which is
the same thing as learning, since any kind of external information
must first set in motion a process in the mind in order to be
perceived. By intelligence, to separate it from knowledge, I
understand the efficiency of learning. Thus, it can be said that in my
definitions intelligence is a property of consciousness, but it's
really unnecessarily confusing to use these way-overloaded terms, and
it's almost meaningless to use them without clarifying what's
meant in a particular case.
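A toy rendering of these definitions (the names below are invented for
illustration, not Vladimir's): reflection as the learner recording the
processes running in its own mind, and intelligence as the efficiency of
learning read off those records:

```python
class ReflectiveLearner:
    def __init__(self):
        self.trace = []  # reflection: memories of the learner's own updates

    def learn(self, error: float):
        """Process one experience, and record that processing itself."""
        self.trace.append(("update", error))

    def intelligence(self) -> float:
        """Efficiency of learning: error reduction per recorded update."""
        errors = [e for _, e in self.trace]
        if len(errors) < 2:
            return 0.0
        return (errors[0] - errors[-1]) / (len(errors) - 1)

agent = ReflectiveLearner()
for err in [1.0, 0.6, 0.35, 0.2]:   # a hypothetical improving error curve
    agent.learn(err)
print(agent.intelligence())          # steeper improvement = more "intelligent"
```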

-- 
Vladimir Nesov
[EMAIL PROTECTED]




Re: [agi] Consciousness vs. Intelligence

2008-05-28 Thread Matt Mahoney

--- John G. Rose [EMAIL PROTECTED] wrote:

 Consciousness with minimal intelligence may be easier to build than
 general intelligence. General intelligence is the one that takes the
 resources. A general consciousness algorithm, one that creates a
 consciousness in any environment, may be simpler than a general
 intelligence algorithm that acquires intelligence in any environment.
 The two can go hand in hand but one can be minimized against the other.
 But I don't understand the relationship between consciousness and
 intelligence. I want to say that they are like disjoint vectors but that
 doesn't seem right...

You need to define your terms.  What properties of an algorithm make it
conscious?  What properties make it intelligent?  To some people, the two
terms are equivalent.  To others, consciousness does not exist.


-- Matt Mahoney, [EMAIL PROTECTED]

