Re: [agi] Chaogate chips: Yum!

2008-11-14 Thread Olie Lamb
Mmmm... Chaoglate-chip cookie processing!

On 11/6/08, Richard Loosemore [EMAIL PROTECTED] wrote:

 A report about research to build chaotic logic:

 http://technology.newscientist.com/article/mg20026801.800




Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-14 Thread Jiri Jelinek
On Fri, Nov 14, 2008 at 2:07 AM, John G. Rose [EMAIL PROTECTED] wrote:
there are many computer systems now, domain specific intelligent ones where 
their life is more
important than mine. Some would say that the battle is already lost.

For now, it's not really your life (or interest) vs the system's life
(or interest). It's rather your life (or interest) vs the lives (or
interests) of the people the system protects/supports. Our machines still
work for humans. At least that still seems to be the case ;-). If we
are stupid enough to develop very powerful machines without equally
powerful safety controls, then we (just like many other species) are
due for extinction because of our adaptability limitations.

Regards,
Jiri Jelinek




RE: [agi] Ethics of computer-based cognitive experimentation

2008-11-14 Thread John G. Rose
 From: Jiri Jelinek [mailto:[EMAIL PROTECTED]
 On Fri, Nov 14, 2008 at 2:07 AM, John G. Rose [EMAIL PROTECTED]
 wrote:
 there are many computer systems now, domain specific intelligent ones
 where their life is more
 important than mine. Some would say that the battle is already lost.
 
 For now, it's not really your life (or interest) vs the system's life
 (or interest). It's rather your life (or interest) vs lives (or
 interests) of people the system protects/supports. Our machines still
 work for humans. At least it still seems to be the case ;-)). If we
 are stupid enough to develop very powerful machines without equally
 powerful safety controls then we (just like many other species) are
 due for extinction for adaptability limitations.
 

It is where the interests of others are valued more highly than an individual's
life. Ancient Rome placed the entertainment interests of the masses above the
lives of those being devoured by lions in the arena. I would say that the
interests of computers and machines today stand, in many cases, in a similar
relation to individual lives.

Our herd mentality makes it easy for rights to be taken away, while at the same
time the loss is accepted and defended as necessary and even an improvement.
Example - anonymity and privacy = gone. It sounds paranoid, but there are many
who agree on this.

It is an icky subject, easy to ignore, and perhaps something that hinders
technological progress.

John





Re: [agi] Chaogate chips: Yum!

2008-11-14 Thread Eric Burton
HATED IT

On 11/14/08, Olie Lamb [EMAIL PROTECTED] wrote:
 Mmmm... Chaoglate-chip cookie processing!

 On 11/6/08, Richard Loosemore [EMAIL PROTECTED] wrote:

 A report about research to build chaotic logic:

 http://technology.newscientist.com/article/mg20026801.800







Why consciousness is hard to define (was Re: [agi] Ethics of computer-based cognitive experimentation)

2008-11-14 Thread Matt Mahoney
--- On Fri, 11/14/08, Colin Hales [EMAIL PROTECTED] wrote:
Try running yourself with empirical results instead of metabelief
(belief about belief). You'll get someplace, i.e. you'll resolve the
inconsistencies. When inconsistencies are testably absent, no
matter how weird the answer, it will deliver maximally informed
choices. Not facts. Facts will only ever appear differently after
choices are made. This too is a fact...which I have chosen to make
choices about. :-) If you fail to resolve your inconsistency then you
are guaranteeing that your choices are minimally informed.

Fine. By your definition of consciousness, I must be conscious because I can 
see and because I can apply the scientific method, which you didn't precisely 
define, but I assume that means I can do experiments and learn from them.

But by your definition, a simple modification to autobliss ( 
http://www.mattmahoney.net/autobliss.txt ) would make it conscious. It already 
applies the scientific method. It outputs 3 bits (2 randomly picked inputs to 
an unknown logic gate and a proposed output) and learns the logic function. The 
missing component is vision. But suppose I replace the logic function (a 4 bit 
value specified by the teacher) with a black box with 3 switches and a light 
bulb to indicate whether the proposed output (one of the switches) is right or 
wrong. You also didn't precisely define what constitutes vision, so I assume a 
1 pixel system qualifies.
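
A minimal sketch of the kind of learner being described here, assuming a
4-entry truth table for the unknown gate and a single right/wrong bit for the
light bulb; this is illustrative only, not the actual autobliss code:

    import random

    # Illustrative sketch only, not the actual autobliss program.
    # The teacher's unknown 2-input logic gate is a 4-entry truth table;
    # the learner sees only a right/wrong signal (the "light bulb").
    def train(truth_table, trials=200):
        estimate = [random.randint(0, 1) for _ in range(4)]    # initial guesses
        for _ in range(trials):
            a, b = random.randint(0, 1), random.randint(0, 1)  # 2 random inputs
            guess = estimate[2 * a + b]                        # proposed output
            if guess != truth_table[2 * a + b]:                # bulb says "wrong"
                estimate[2 * a + b] ^= 1                       # flip that entry
        return estimate

    xor_gate = [0, 1, 1, 0]      # hidden function chosen by the teacher
    print(train(xor_gate))       # almost surely converges to [0, 1, 1, 0]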

Of course I don't expect anyone to precisely define consciousness (as a 
property of Turing machines). There is no algorithmically simple definition 
that agrees with intuition, i.e. that living humans and nothing else are 
conscious. This goes beyond Rice's theorem, which would make any nontrivial 
definition not computable. Even allowing non-computable definitions (the output 
can be yes, no, or maybe), you still have the problem that any 
specification with algorithmic complexity K can be expressed as a program with 
complexity K. Given any simple specification (meaning K is small) I can write a 
simple program that satisfies it (my program has complexity at most K). 
However, for humans, K is about 10^9 bits. That means any specification smaller 
than a 1 GB file or 1000 books would allow a counterintuitive example of a 
simple program that meets your test for consciousness.

Try it if you don't believe me. Give me a simple definition of consciousness 
without pointing to a human (like the Turing test does). I am looking for a 
program is_conscious(x) shorter than 10^9 bits that inputs a Turing machine x 
and outputs yes, no, or maybe.
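
As a hedged illustration of the counting argument (all names below are
hypothetical, not taken from autobliss or any existing library): any computable
specification spec of description length K is satisfied, whenever anything
satisfies it, by a search wrapper only slightly longer than K, which is why a
short definition would also certify a short, plainly mindless program.

    from itertools import count

    # Illustrative sketch; names are hypothetical. Given any computable
    # specification spec(candidate) -> bool of description length K, this
    # wrapper has length K + O(1) and returns a satisfying candidate
    # whenever one exists, simply by enumerating bit strings.
    def first_satisfying(spec):
        for n in count(0):
            for i in range(2 ** n):
                candidate = format(i, 'b').zfill(n) if n else ''
                if spec(candidate):
                    return candidate

    # Toy stand-in for a "short definition of consciousness": any spec
    # this simple is met by an equally simple, obviously mindless string.
    toy_spec = lambda s: s.count('1') >= 3
    print(first_satisfying(toy_spec))   # prints '111'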

-- Matt Mahoney, [EMAIL PROTECTED]





[agi] A paper that actually does solve the problem of consciousness

2008-11-14 Thread Richard Loosemore


I completed the first draft of a technical paper on consciousness the 
other day.   It is intended for the AGI-09 conference, and it can be 
found at:


http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf

The title is "Consciousness in Human and Machine: A Theory and Some 
Falsifiable Predictions", and it does solve the problem, believe it or not.


But I have no illusions:  it will be misunderstood, at the very least. 
I expect there will be plenty of people who argue that it does not solve 
the problem, but I don't really care, because I think history will 
eventually show that this is indeed the right answer.  It gives a 
satisfying answer to all the outstanding questions and it feels right.


Oh, and it does make some testable predictions.  Alas, we do not yet 
have the technology to perform the tests, but the predictions are on 
the table, anyhow.


In a longer version I would go into a lot more detail, introducing  the 
background material at more length, analyzing the other proposals that 
have been made and fleshing out the technical aspects along several 
dimensions.  But the size limit for the conference was 6 pages, so that 
was all I could cram in.






Richard Loosemore




RE: [agi] A paper that actually does solve the problem of consciousness

2008-11-14 Thread Derek Zahn

Richard,
 
As a general rule, I find discussions about consciousness, qualia, and so forth 
to be unhelpful, frustrating, and unnecessary.  However, I enjoyed this paper a 
great deal.  Thanks for writing it.  Because of my inclinations on these 
matters, I am not an expert on the history of thought on the topic, or its 
current status among philosophers, but I find your account to be credible and 
reasonably clear.  I'm not particularly repulsed by the idea that "... our most 
immediate, subjective experience of the world is, in some sense, an artifact 
produced by the operation of the brain", so searching for a more satisfying 
conclusion is not really high up on my priority list.  Still, I don't see 
anything immediately objectionable in your analysis.
 
I am not certain about the distinguishing power of your falsifiable 
predictions, but only because I would need to give that considerably more 
thought.
 
I look forward to being in the audience when you present the paper at AGI-09.
 
Derek Zahn
agiblog.net




RE: [agi] A paper that actually does solve the problem of consciousness

2008-11-14 Thread Derek Zahn

Oh, one other thing I forgot to mention.  To reach my cheerful conclusion about 
your paper, I have to be willing to accept your model of cognition.  I'm pretty 
easy on that premise-granting, by which I mean that I'm normally willing to 
go along with architectural suggestions to see where they lead.  But I will be 
curious to see whether others are also willing to go along with you on your 
generic  cognitive system model.




Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-14 Thread Robert Swaine
Consciousness is akin to the phlogiston theory in chemistry.  It is likely a 
shadow concept, similar to the way bodily reactions make us feel that the heart 
is the seat of emotions.  Gladly, cardiologists and heart surgeons do not look 
for a spirit, a soul, or kindness in the heart muscle.  The brain need not 
contain anything beyond the means to effect physical behavior, and feedback 
about that behavior.

A finite degree of sensory awareness serves as a suitable replacement for 
consciousness; in other words, just feedback.

Would it really make a difference if we were all biological machines, and our 
perceptions were the same as those of other animals or other designed minds; 
more so if we were in a simulated existence?  The search for consciousness is a 
misleading (though not entirely fruitless) path to AGI.


--- On Fri, 11/14/08, Richard Loosemore [EMAIL PROTECTED] wrote:

 From: Richard Loosemore [EMAIL PROTECTED]
 Subject: [agi] A paper that actually does solve the problem of consciousness
 To: agi@v2.listbox.com
 Date: Friday, November 14, 2008, 12:27 PM
 I completed the first draft of a technical paper on
 consciousness the 
 other day.   It is intended for the AGI-09 conference, and
 it can be 
 found at:
 
 http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf
 
 The title is Consciousness in Human and Machine: A
 Theory and Some 
 Falsifiable Predictions, and it does solve the
 problem, believe it or not.
 
 But I have no illusions:  it will be misunderstood, at the
 very least. 
 I expect there will be plenty of people who argue that it
 does not solve 
 the problem, but I don't really care, because I think
 history will 
 eventually show that this is indeed the right answer.  It
 gives a 
 satisfying answer to all the outstanding questions and it
 feels right.
 
 Oh, and it does make some testable predictions.  Alas, we
 do not yet 
 have the technology to perform the tests yet, but the
 predictions are on 
 the table, anyhow.
 
 In a longer version I would go into a lot more detail,
 introducing  the 
 background material at more length, analyzing the other
 proposals that 
 have been made and fleshing out the technical aspects along
 several 
 dimensions.  But the size limit for the conference was 6
 pages, so that 
 was all I could cram in.
 
 
 
 
 
 Richard Loosemore
 
 





Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-14 Thread Vladimir Nesov
Some notes/review.

Whether an AGI is conscious is independent of whether it will
rebel or be dangerous. Answering any kind of question about
consciousness doesn't answer a question about safety.

How is the situation with p-zombies atom-by-atom identical to
conscious beings not resolved by saying that in this case
consciousness is an epiphenomenon, a meaningless concept?
http://www.overcomingbias.com/2008/04/zombies.html
http://www.overcomingbias.com/2008/04/zombies-ii.html
http://www.overcomingbias.com/2008/04/anti-zombie-pri.html

Jumping into the molecular framework as a description of human cognition is
unwarranted. It could be a description of an AGI design, or it could be a
theoretical description of a more general epistemology, but as presented
it's not general enough to automatically correspond to the brain.
Also, the semantics of atoms is a tricky business; for all I know it keeps
shifting with the focus of attention, often dramatically. Saying that
the self is a cluster of atoms doesn't cut it.

Bottoming out the explanation of experience is a good answer, but you
don't need to point to specific moving parts of a specific cognitive
architecture to give it (I don't see how that helps with the argument).
If you have a belief (generally, a state of mind), it may indicate
that the world has a certain property (the world having that property
caused you to have this belief), or it may indicate that you have a
certain cognitive quirk that caused this belief, a loophole in
cognition. There is always a cause; the trick is in correctly
dereferencing the belief.

Subjective phenomena might be unreachable for meta-introspection, but
that doesn't place them on a different level or make them unanalyzable;
you can in principle inspect them from the outside, using tools other
than one's own mind. You yourself just presented a model of what's
happening.

Meaning/information is relative; it can be represented within a basis,
for example within a mind, and communicated to another mind. Like
speed, it has no absolute value, but the laws of relativity, of conversion
between frames of reference, between minds, are precise and not
arbitrary. Possible-worlds semantics is one way to establish a basis,
allowing concepts to be communicated, but maybe not a very good one.
Grounding in a common cognitive architecture is probably a good move,
but it doesn't have fundamental significance.

The predictions are not described carefully enough to appear to follow
from your theory. They use some of its terminology, but at a level
that allows literal translation into the language of perceptual wiring,
with a correspondence between qualia and the areas implementing
modalities/receiving perceptual input.

You didn't argue the general case of AGI, so how does it follow
that any AGI is bound to be conscious?

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-14 Thread Richard Loosemore

Derek Zahn wrote:
Oh, one other thing I forgot to mention.  To reach my cheerful 
conclusion about your paper, I have to be willing to accept your model 
of cognition.  I'm pretty easy on that premise-granting, by which I 
mean that I'm normally willing to go along with architectural 
suggestions to see where they lead.  But I will be curious to see 
whether others are also willing to go along with you on your generic  
cognitive system model.




That's an interesting point.

In fact, the argument doesn't change too much if we go to other models 
of cognition; it just looks different ... and more complicated, which is 
partly why I wanted to stick with my own formalism.


The crucial part is that there has to be a very powerful mechanism that 
lets the system analyze its own concepts - it has to be able to reflect 
on its own knowledge in a very recursive kind of way.  Now, I think that 
Novamente, OpenCog and other systems will eventually have that sort of 
capability because it is such a crucial part of the general bit in 
artificial general intelligence.
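
A purely illustrative toy, and emphatically not the architecture from the
paper: the recursive self-analysis described above can be pictured as concept
atoms that either decompose into constituent atoms or bottom out as
unanalyzable primitives (all names below are hypothetical).

    from dataclasses import dataclass, field

    # Toy illustration only: an atom either decomposes into constituent
    # atoms or bottoms out as an unanalyzable primitive when asked to
    # analyze itself.
    @dataclass
    class Atom:
        name: str
        parts: list = field(default_factory=list)   # empty list => primitive

        def analyze(self):
            if not self.parts:              # analysis bottoms out here
                return self.name            # no further description available
            return {self.name: [p.analyze() for p in self.parts]}

    red = Atom("red")                       # unanalyzable primitive
    round_shape = Atom("round")
    apple = Atom("apple", [red, round_shape])
    print(apple.analyze())                  # {'apple': ['red', 'round']}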


Once a system has that mechanism, I can use it to take the line I took 
in the paper.


Also, the generic model of cognition was useful to me in the later part 
of the paper where I want to analyze semantics.  Other AGI architectures 
(logical ones for example) implicitly stick with the very strict kinds 
of semantics (possible worlds, e.g.) that I actually think cannot be 
made to work for all of cognition.


Anyhow, thanks for your positive comments.



Richard Loosemore




Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-14 Thread Richard Loosemore

Robert Swaine wrote:

Conciousness is akin to the phlogiston theory in chemistry.  It is
likely a shadow concept, similar to how the bodily reactions make us
feel that the heart is the seat of emotions.  Gladly, cardiologist
and heart surgeons do not look for a spirit, a soul, or kindness in
the heart muscle.  The brain organ need not contain anything beyond
the means to effect physical behavior,.. and feedback as to those
behavior.

A finite degree of sensory awareness serves as a suitable replacement
for consciousness, in otherwords, just feedback.

Would it really make a difference if we were all biological machines,
and our perceptions were the same as other animals, or other
designed minds; more so if we were in a simulated existence.  The
search for consciousness is a misleading (though not entirely
fruitless) path to AGI.


Well, with respect, it does sound as though you did not read the paper
itself, or any of the other books, like Chalmers' "The Conscious Mind".

I say this because there are lengthy (and standard) replies to the 
points that you make, both in the paper and in the literature.


And, please don't misunderstand: this is not a path to AGI.  Just an 
important side issue that the general public cares about enormously.




Richard Loosemore



--- On Fri, 11/14/08, Richard Loosemore [EMAIL PROTECTED] wrote:


From: Richard Loosemore [EMAIL PROTECTED]
Subject: [agi] A paper that actually does solve the problem of consciousness
To: agi@v2.listbox.com
Date: Friday, November 14, 2008, 12:27 PM

I completed the first draft of a technical paper on consciousness the
other day.  It is intended for the AGI-09 conference, and it can
be found at:

http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf


The title is Consciousness in Human and Machine: A Theory and Some
 Falsifiable Predictions, and it does solve the problem, believe
it or not.

But I have no illusions:  it will be misunderstood, at the very
least. I expect there will be plenty of people who argue that it
does not solve the problem, but I don't really care, because I
think history will eventually show that this is indeed the right
answer.  It gives a satisfying answer to all the outstanding
questions and it feels right.

Oh, and it does make some testable predictions.  Alas, we do not
yet have the technology to perform the tests yet, but the 
predictions are on the table, anyhow.


In a longer version I would go into a lot more detail, introducing
the background material at more length, analyzing the other
proposals that have been made and fleshing out the technical
aspects along several dimensions.  But the size limit for the
conference was 6 pages, so that was all I could cram in.





Richard Loosemore










Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-14 Thread Vladimir Nesov
(I'm sorry that I make some unclear statements on semantics/meaning,
I'll probably get to the description of this perspective later on the
blog (or maybe it'll become obsolete before that), but it's a long
story, and writing it up on the spot isn't an option.)

On Sat, Nov 15, 2008 at 2:18 AM, Richard Loosemore [EMAIL PROTECTED] wrote:
 Taking the position that consciousness is an epiphenomenon and is therefore
 meaningless has difficulties.

Rather, p-zombieness in an atom-by-atom identical environment is the epiphenomenon.


 By saying that it is an epiphenomenon, you actually do not answer the
 questions about instrinsic qualities and how they relate to other things in
 the universe.  The key point is that we do have other examples of
 epiphenomena (e.g. smoke from a steam train),

What do you mean by smoke being epiphenomenal?

 but their ontological status
 is very clear:  they are things in the world.  We do not know of other
 things with such puzzling ontology (like consciousness), that we can use as
 a clear analogy, to explain what consciousness is.

 Also, it raises the question of *why* there should be an epiphenomenon.
  Calling it an E does not tell us why such a thing should happen.  And it
 leaves us in the dark about whether or not to believe that other systems
 that are not atom-for-atom identical with us, should also have this
 epiphenomenon.

I don't know how to parse the word epiphenomenon in this context. I
use it to describe reference-free, meaningless concepts, so you can't
say that some epiphenomenon is present here or there; that would be
meaningless.


 Jumping into molecular framework as describing human cognition is
 unwarranted. It could be a description of AGI design, or it could be a
 theoretical description of more general epistemology, but as presented
 it's not general enough to automatically correspond to the brain.
 Also, semantics of atoms is tricky business, for all I know it keeps
 shifting with the focus of attention, often dramatically. Saying that
 self is a cluster of atoms doesn't cut it.

 I'm not sure of what you are saying, exactly.

 The framework is general in this sense:  its components have *clear*
 counterparts in all models of cognition, both human and machine.  So, for
 example, if you look at a system that uses logical reasoning and bare
 symbols, that formalism will differentiate between the symbols that are
 currently active, and playing a role in the system's analysis of the world,
 and those that are not active.  That is the distinction between foreground
 and background.

Without a working, functional theory of cognition, this high-level
descriptive picture has little explanatory power. It might be a step
towards developing a useful theory, but it doesn't explain anything.
There is a set of states of mind that correlates with experience of
apples, etc. So what? You can't build a detailed edifice on general
principles and claim that far-reaching conclusions apply to the actual
brain. They might, but you need a semantic link from the theory to the
described functionality.


 As for the self symbol, there was no time to go into detail.  But there
 clearly is an atom that represents the self.

*shrug*
It only stands as a definition; there is no self-neuron, or anything
else easily identifiable as the self: it's a complex thing. I'm not sure I
even understand what the self refers to subjectively. I don't feel any
clear focus of self-perception; my experience is filled with thoughts
on many things, some of them involving management of the thought process,
some of them about external concepts, but no unified center to speak of...


 Bottoming out of explanation of experience is a good answer, but you
 don't need to point to specific moving parts of a specific cognitive
 architecture to give it (I don't see how it helps with the argument).
 If you have a belief (generally, a state of mind), it may indicate
 that the world has a certain property, that world having that property
 caused you to have this belief, or it can indicate that you have a
 certain cognitive quirk that caused this belief, a loophole in
 cognition. There is always a cause, the trick is in correctly
 dereferencing the belief.
 http://www.overcomingbias.com/2008/03/righting-a-wron.html

 Not so fast.  There are many different types of mistaken beliefs. Most of
 these are so shallow that they could not possibly explain the
 characteristics of consciousness that need to be explained.

 And, as I point out in the second part, it is not at all clear that this
 particular issue can be given the status of mistaken or failure.  It
 simply does not fit with all the other known examples of failures of the
 cognitive system, such as hallucinations, etc.

 I think it would be intellectually dishonest to try to sweep it under the rug
 with those other things, because those are clearly breakdowns that, with a
 little care, could all be avoided.  But this issue is utterly different:  by
 making the argument that I did, I think I showed that it was 

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-14 Thread Matt Mahoney
--- On Fri, 11/14/08, Richard Loosemore [EMAIL PROTECTED] wrote:
 http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf

Interesting that some of your predictions have already been tested; in 
particular, synaesthetic qualia were described by George Stratton in 1896. When 
people wear glasses that turn images upside down, they adapt after several days 
and begin to see the world normally.

http://www.cns.nyu.edu/~nava/courses/psych_and_brain/pdfs/Stratton_1896.pdf
http://wearcam.org/tetherless/node4.html

This is equivalent to your prediction #2 where connecting the output of neurons 
that respond to the sound of a cello to the input of neurons that respond to 
red would cause a cello to sound red. We should expect the effect to be 
temporary.

I'm not sure how this demonstrates consciousness. How do you test that the 
subject actually experiences redness at the sound of a cello, rather than just 
behaving as if experiencing redness, for example, claiming to hear red?

I can do a similar experiment with autobliss (a program that learns a 2 input 
logic function by reinforcement). If I swapped the inputs, the program would 
make mistakes at first, but adapt after a few dozen training sessions. So 
autobliss meets one of the requirements for qualia. The other is that it be 
advanced enough to introspect on itself, and that which it cannot analyze 
(describe in terms of simpler phenomena) is qualia. What you describe as 
"elements" are neurons in a connectionist model, and the "atoms" are the set of 
active neurons. Analysis means describing a neuron in terms of its inputs. 
Then the qualia are the first layer of a feedforward network. In this respect, 
autobliss is a single neuron with 4 inputs, and those inputs are therefore its 
qualia.
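
As a hedged sketch of the input-swap experiment, again illustrative rather
than the real autobliss code: exchanging the two input wires of an
already-learned asymmetric gate produces a few early mistakes before the same
corrective feedback re-adapts the estimate.

    import random

    # Illustrative sketch, not the actual autobliss program: swap the two
    # input wires of an already-learned asymmetric gate and count how many
    # corrections are needed before the learner stops making mistakes.
    def adapt(estimate, truth_table, trials=200):
        mistakes = 0
        for _ in range(trials):
            a, b = random.randint(0, 1), random.randint(0, 1)
            idx = 2 * a + b
            if estimate[idx] != truth_table[idx]:   # feedback says "wrong"
                mistakes += 1
                estimate[idx] ^= 1                  # corrective flip
        return mistakes

    gate = [0, 0, 1, 0]               # "a AND NOT b", asymmetric in its inputs
    estimate = list(gate)             # learner already trained on this gate
    swapped = [gate[0], gate[2], gate[1], gate[3]]   # input wires exchanged
    print(adapt(estimate, swapped))   # typically 2: one fix per swapped entry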

You might object that autobliss is not advanced enough to ponder its own self 
existence. Perhaps you define "advanced" to mean it is capable of language 
(passing the Turing test), but I don't think that's what you meant. In that 
case, you need to define more carefully what qualifies as "sufficiently powerful".


-- Matt Mahoney, [EMAIL PROTECTED]







Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-14 Thread Ben Goertzel
Richard,

In your paper you say


The argument does not
say anything about the nature of conscious experience, qua
subjective experience, but the argument does say why it
cannot supply an explanation of subjective experience. Is
explaining why we cannot explain something the same as
explaining it?


I think it isn't the same...

The problem is that there may be many possible explanations for why we can't
explain consciousness.  And it seems there is no empirical way to decide
among these explanations.  So we need to decide among them via some sort of
metatheoretical criteria -- Occam's Razor, or conceptual consistency with
our scientific ideas, or some such.  The question for you then is, why is
yours the best explanation of why we can't explain consciousness?

But I have another confusion about your argument.  I understand the idea
that a mind's analysis process has eventually got to bottom out somewhere,
so that it will describe some entities using descriptions that are (from its
perspective) arbitrary and can't be decomposed any further.  These
bottom-level entities could be sensations or they could be sort-of arbitrary
internal tokens out of which internal patterns are constructed.

But what do you say about the experience of being conscious of a chair,
then?  Are you saying that the consciousness I have of the chair is the
*set* of all the bottom-level unanalyzables into which the chair is
decomposed by my mind?

ben


On Fri, Nov 14, 2008 at 11:44 PM, Matt Mahoney [EMAIL PROTECTED] wrote:

 --- On Fri, 11/14/08, Richard Loosemore [EMAIL PROTECTED] wrote:
 
 http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf

 Interesting that some of your predictions have already been tested, in
 particular, synaesthetic qualia was described by George Stratton in 1896.
 When people wear glasses that turn images upside down, they adapt after
 several days and begin to see the world normally.

 http://www.cns.nyu.edu/~nava/courses/psych_and_brain/pdfs/Stratton_1896.pdf
 http://wearcam.org/tetherless/node4.html

 This is equivalent to your prediction #2 where connecting the output of
 neurons that respond to the sound of a cello to the input of neurons that
 respond to red would cause a cello to sound red. We should expect the effect
 to be temporary.

 I'm not sure how this demonstrates consciousness. How do you test that the
 subject actually experiences redness at the sound of a cello, rather than
 just behaving as if experiencing redness, for example, claiming to hear red?

 I can do a similar experiment with autobliss (a program that learns a 2
 input logic function by reinforcement). If I swapped the inputs, the program
 would make mistakes at first, but adapt after a few dozen training sessions.
 So autobliss meets one of the requirements for qualia. The other is that it
 be advanced enough to introspect on itself, and that which it cannot analyze
 (describe in terms of simpler phenomena) is qualia. What you describe as
 elements are neurons in a connectionist model, and the atoms are the set
 of active neurons. Analysis means describing a neuron in terms of its
 inputs. Then qualia is the first layer of a feedforward network. In this
 respect, autobliss is a single neuron with 4 inputs, and those inputs are
 therefore its qualia.

 You might object that autobliss is not advanced enough to ponder its own
 self existence. Perhaps you define advanced to mean it is capable of
 language (pass the Turing test), but I don't think that's what you meant. In
 that case, you need to define more carefully what qualifies as sufficiently
 powerful.


 -- Matt Mahoney, [EMAIL PROTECTED]









-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

A human being should be able to change a diaper, plan an invasion, butcher
a hog, conn a ship, design a building, write a sonnet, balance accounts,
build a wall, set a bone, comfort the dying, take orders, give orders,
cooperate, act alone, solve equations, analyze a new problem, pitch manure,
program a computer, cook a tasty meal, fight efficiently, die gallantly.
Specialization is for insects.  -- Robert Heinlein


