Re: [agi] Doubts raised over brain scan findings

2009-01-15 Thread Richard Loosemore

Vladimir Nesov wrote:

On Thu, Jan 15, 2009 at 4:34 AM, Richard Loosemore r...@lightlink.com wrote:

Vladimir Nesov wrote:

On Thu, Jan 15, 2009 at 3:03 AM, Richard Loosemore r...@lightlink.com
wrote:

The whole point about the paper referenced above is that they are
collecting
(in a large number of cases) data that is just random noise.


So what? The paper points out a methodological problem that in itself
has little to do with neuroscience.

Not correct at all:  this *is* neuroscience.  I don't understand why you say
that it is not.



From what I got from the abstract and by skimming the paper, it's a
methodological problem in handling data from neuroscience experiments
(bad statistics).


The field as a whole is hardly
mortally afflicted with that problem

I mentioned it because there is a context in which this sits.  The context
is that an entire area - which might be called "deriving psychological
conclusions from brain scan data" - is getting massive funding and massive
attention, and yet it is quite arguably in an Emperor's New Clothes state.
 In other words, the conclusions being drawn are (for a variety of reasons)
of very dubious quality.


If you look at any field large enough, there will be bad science.

According to the significant number of people who criticize it, this field
appears to be dominated by bad science.  This is not just an isolated case.



That's a whole new level of alarm, relevant for anyone trying to learn
from neuroscience, but it requires stronger substantiation, mere 50
papers that got confused with statistics don't do it justice.



You are missing the context:  I mentioned this because of an earlier 
discussion centered on the paper by Trevor Harley, and the follow-up 
paper that he and I wrote together.   We are only two people among many 
who are making various kinds of criticisms.  This is certainly not 
*just* a case of 50 papers which did some not very good statistics - 
taking it that way would be a complete misunderstanding of the situation.


The reason I flagged this most recent paper was that some people seemed 
to be under the impression, from that earlier discussion, that perhaps 
this was just my imagination.  I wanted to point out that there are 
increasing numbers of people making the same Emperor-Has-No-Clothes 
complaint.


Sooner or later this will become big news in the scientific community - 
someone will write a big expose, and the neuroscience people will find 
themselves under fire for having wasted everyone's time with so much 
well-funded bogus science.  But right now we are in the early phase, 
much as it was a few years ago when you could read all about the mortgage 
crisis and the financial meltdown in the left-wing press, but everyone 
else was ignoring them.  What you are getting is an inside track on this 
below-ground scandal, coming from me, a few years before you read it on 
the front pages of Scientific American or Nature.





Richard Loosemore




Re: [agi] Bayesian surprise attracts human attention

2009-01-15 Thread Richard Loosemore

Bob Mottram wrote:

2009/1/15 Ronald C. Blue ronb...@u2ai.us:

Bayesian surprise attracts human attention http://tinyurl.com/77p9xo




In my opinion any research carried out at
universities using public money should be available to the public,
without additional charges.


Agreed.

 Sounds interesting.

Well... not so much.

This is what I have gleaned from the abstract.

The researchers demolished an old idea about attention (from the 1950s) 
and pretended to test a new idea.


In fact the new idea is so well-established that everyone takes it for 
granted:  attention shift is driven by surprise or novelty or 
unexpectedness.


Perspective:

Recall that there was once a theory that heat was a fluid that passed 
between bodies (caloric).  Then, that old theory was superseded by the 
new idea that heat and temperature were just average molecular motion.


Now imagine that someone came along and published a paper claiming to 
have discovered this second idea, years after it had become common 
knowledge.  But what you find, when you read the details, is that what 
they actually did was just take a thermometer and draw a scale on the 
outside of it - a completely arbitrary scale that they made up - and 
then declare that BECAUSE they slapped the scale on the outside, 
THEREFORE they have validated or proved or demonstrated the idea of 
temperature being molecular motion, rather than the movements of caloric 
fluid.


This is exactly what these people have just done with the notion of 
surprise.  It adds nothing useful to what we know.


If they had shown that there is a mechanism that actually computes the 
bayesian probabilities, then governs the attention shift using the 
results of that calculation, that would have been progress.  But just 
finding something that covaries with novelty is like shooting fish in a 
barrel.
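
For what it's worth, the quantity the authors call "surprise" appears, 
as far as I can tell from the abstract, to be nothing more than the KL 
divergence between the observer's prior and posterior beliefs.  Here is 
a minimal sketch in Python with a toy Beta-Bernoulli observer -- my own 
illustration, not the authors' model -- showing that the "surprise" of 
the same stimulus shrinks as it stops being novel:

from scipy.special import betaln, digamma

def kl_beta(a1, b1, a0, b0):
    """KL divergence KL( Beta(a1,b1) || Beta(a0,b0) ) in nats."""
    return (betaln(a0, b0) - betaln(a1, b1)
            + (a1 - a0) * digamma(a1)
            + (b1 - b0) * digamma(b1)
            + ((a0 - a1) + (b0 - b1)) * digamma(a1 + b1))

# Toy observer: a Beta prior over how often some stimulus feature appears.
a, b = 5.0, 5.0                    # Beta(5,5): feature expected ~50% of the time

for obs in [1, 1, 1, 1]:           # a run of identical, initially unexpected observations
    a_new, b_new = a + obs, b + (1 - obs)
    surprise = kl_beta(a_new, b_new, a, b)   # "Bayesian surprise" = KL(posterior || prior)
    print(f"obs={obs}  surprise={surprise:.4f} nats")
    a, b = a_new, b_new            # the posterior becomes the next prior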


Of course, it's not like these are the only people making this kind of 
non-progress  ;-)






Richard Loosemore




[agi] Doubts raised over brain scan findings

2009-01-14 Thread Richard Loosemore


For anyone interested in recent discussions of neuroscience and the 
level of scientific validity of the various brain-scan claims, the 
study by Vul et al, discussed here:


http://www.newscientist.com/article/mg20126914.700-doubts-raised-over-brain-scan-findings.html

and available here:

http://www.pashler.com/Articles/Vul_etal_2008inpress.pdf

... is a welcome complement to the papers by Trevor Harley (and myself).


The title of the paper is "Voodoo Correlations in Social Neuroscience", 
and that use of the word "voodoo" pretty much sums up the attitude of a 
number of critics of the field.


We've attacked from a different direction, but we had a wide range of 
targets to choose from, believe me.


The short version of the overall story is that neuroscience is out of 
control as far as overinflated claims go.





Richard Loosemore




Re: [agi] Doubts raised over brain scan findings

2009-01-14 Thread Richard Loosemore

Vladimir Nesov wrote:

On Wed, Jan 14, 2009 at 10:59 PM, Richard Loosemore r...@lightlink.com wrote:

For anyone interested in recent discussions of neuroscience and the level of
scientific validity of the various brain-scan claims, the study by Vul et
al, discussed here:

http://www.newscientist.com/article/mg20126914.700-doubts-raised-over-brain-scan-findings.html

and available here:

http://www.pashler.com/Articles/Vul_etal_2008inpress.pdf

... is a welcome complement to the papers by Trevor Harley (and myself).


The title of the paper is "Voodoo Correlations in Social Neuroscience", and
that use of the word "voodoo" pretty much sums up the attitude of a number
of critics of the field.

We've attacked from a different direction, but we had a wide range of
targets to choose from, believe me.

The short version of the overall story is that neuroscience is out of
control as far as overinflated claims go.



Richard, even if your concerns are somewhat valid, why is it
interesting here? It's not like neuroscience is dominated by
discussions of (mis)interpretation of results, they are collecting
data, and with that they are steadily getting somewhere.


I don't understand.

The whole point about the paper referenced above is that they are 
collecting (in a large number of cases) data that is just random noise.
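
For anyone who wants to see the statistical point in miniature, here is 
a toy Python simulation of the kind of circular analysis being 
criticized.  The subject and voxel counts are made up for illustration 
(this is my sketch, not the authors' code):  generate pure noise, pick 
the voxels that happen to correlate best with a behavioral score, then 
report the correlation of that "region of interest" with the same score.

import numpy as np

rng = np.random.default_rng(0)

n_subjects = 20     # hypothetical sample size
n_voxels = 10_000   # hypothetical number of voxels

# Pure noise: no real relationship between "activation" and behavior.
activation = rng.standard_normal((n_subjects, n_voxels))
behavior = rng.standard_normal(n_subjects)

# Correlate every voxel with the behavioral score.
act_z = (activation - activation.mean(0)) / activation.std(0)
beh_z = (behavior - behavior.mean()) / behavior.std()
voxel_r = act_z.T @ beh_z / n_subjects

# Non-independent ("circular") step: select the best-correlated voxels,
# then report the correlation of their average signal with behavior.
best = np.argsort(voxel_r)[-50:]
roi = activation[:, best].mean(axis=1)
reported_r = np.corrcoef(roi, behavior)[0, 1]

print(f"strongest single-voxel r found in pure noise: {voxel_r.max():.2f}")
print(f"'ROI' correlation after circular selection:   {reported_r:.2f}")

Run it and you get a correlation that looks spectacular, from data that 
contains no signal at all.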


And in the work that I did, analyzing several neuroscience papers, the 
conclusion was that many of their conclusions were unfounded.


That is exactly the opposite of what you just said:  they are not 
steadily getting somewhere; they are filling the research world with 
noise.  I do not understand how you can see what was said in the above 
paper, and yet say what you just said.


Bear in mind that we are targeting the (extremely large number of) 
claims of psychological validity that are coming out of the 
neuroscience community.  If they collect data and do not make 
psychological claims, all power to them.



I don't particularly want to get into an argument about it.  It was just 
a little backup information for what I said before.





Richard Loosemore




Re: [agi] Doubts raised over brain scan findings

2009-01-14 Thread Richard Loosemore

Vladimir Nesov wrote:

On Thu, Jan 15, 2009 at 3:03 AM, Richard Loosemore r...@lightlink.com wrote:

The whole point about the paper referenced above is that they are collecting
(in a large number of cases) data that is just random noise.



So what? The paper points out a methodological problem that in itself
has little to do with neuroscience.


Not correct at all:  this *is* neuroscience.  I don't understand why you 
say that it is not.



The field as a whole is hardly
mortally afflicted with that problem


I mentioned it because there is a context in which this sits.  The 
context is that an entire area - which might be called "deriving 
psychological conclusions from brain scan data" - is getting massive 
funding and massive attention, and yet it is quite arguably in an 
Emperor's New Clothes state.  In other words, the conclusions being 
drawn are (for a variety of reasons) of very dubious quality.


(whether it's even real or not).

It is real.


If you look at any field large enough, there will be bad science.


According to the significant number of people who criticize it, this 
field appears to be dominated by bad science.  This is not just an 
isolated case.



How
is it relevant to study of AGI?


People here are sometimes interested in cognitive science matters, and 
some are interested in the concept of building an AGI by brain 
emulation.  Neuroscience is relevant to that.


Beyond that, this is just an FYI.

I really do not care to put much effort into this.  If people are 
interested, they can read the paper.  But if they doubt the validity of 
the entire idea that there is a problem with neuroscience claims about 
psychological processes, I'm afraid I do not have the time to argue, 
simply because the level of general expertise here is not such that I 
can discuss it without explaining the whole critique from scratch.


As you say, it is not important enough, in an AGI context, to spend much 
time on.





Richard Loosemore




Re: [agi] initial reaction to A2I2's call center product

2009-01-12 Thread Richard Loosemore

Ben Goertzel wrote:

AGI company A2I2 has released a product for automating call center
functionality, see...

http://www.smartaction.com/index.html

Based on reading the website, here is my initial reaction...

Certainly, automating a higher and higher percentage of call center
functionality is a worthy goal, and a place one would expect AGI
technology to be able to play a role.  Current automated call center
systems either provide extremely crude functionality, or else require
extensive domain customization prior to each deployment; and they
still show serious shortcomings even after such customization, due
largely to their inability to interpret the user's statements in terms
of an appropriate contextual understanding.  The promise AGI
technology offers for this domain is the possibility of responding to
user statements with the contextual understanding that only general
intelligence can bring.

The extent to which A2I2 has really solved this very difficult
problem, however, is impossible to assess without actually trying the
product.  What they have might be an incremental improvement over
existing technologies, or it might be a quantum leap forward; based on
the information provided, there is no way to tell.  For example
http://www.tuvox.com/ is a quite sophisticated competitor and it would
be interesting to see a head to head competition between their system
and A2I2's.

The available materials tell little about the underlying technology.
Claims such as


Functionally, it recognizes speech, understands the caller's meaning
and intent, remembers the evolving context of the conversation, and
obtains information in real time from databases and websites.


are evocative but could be interpreted in many different ways.
Interpreted most broadly, this would imply that A2I2 has achieved a
human-level AI system; but if this were the case, there would be
better things to do with it than automate call centers.  Based on the
available information, it's not clear just how narrowly one must
interpret these assertions to obtain agreement with the truth.  What
is clear is that they are taking an adaptive learning based approach
rather than an approach based on extensive hand-coding of linguistic
resources, which is interesting, and vaguely reminiscent of Robert
Hecht-Nielsen's neural net approach to language processing.


Have you asked Peter if he would be willing to share a demonstration 
with us, perhaps at AGI-09?


I agree that the marketing rhetoric could be interpreted anywhere from 
incremental improvement to quantum leap:  but my money is on something 
relatively incremental.


Remember:  something like OCR, when it was first available, seemed 
amazing when it could boast a pickup efficiency of 95%.  But I have had 
the unenviable task of proofreading an entire (Welsh) dictionary in 
which the OCR did 95% of the work and I did the other 5%.  It was a 
nightmare.


That last 5% is where all the action is.
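
To put rough numbers on it (the figures below are a made-up 
illustration, not the actual size of that dictionary):

# Back-of-envelope arithmetic for why a 95% OCR hit rate still means a
# proofreading nightmare.  The dictionary size is hypothetical.
entries = 50_000          # hypothetical number of dictionary entries
chars_per_entry = 60      # hypothetical average characters per entry
accuracy = 0.95           # OCR character-level accuracy

total_chars = entries * chars_per_entry
errors = int(total_chars * (1 - accuracy))

print(f"total characters : {total_chars:,}")   # 3,000,000
print(f"characters to fix: {errors:,}")        # 150,000 corrections by hand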



Richard Loosemore




Re: [agi] Identity abstraction

2009-01-11 Thread Richard Loosemore

Ronald C. Blue wrote:
I would agree that we are mutually close in ideas.  Also current 
programming efforts at AI will not be wasted because those action packs 
can be used as seed for any AGI machine that has self control and 
awareness.  Actually there are many paths to new AGI machines, we even 
went around our own original patent application but maintained the 
fundamental of the theoretical model of AGI based on the Correlational 
Holographic Opponent Processing model of the human mind.  So far it 
seems as if our approach is the best direction relative to others' published 
research and models.  Future direction and success are unknowable.
 
You are correct that the heart of the AGI device is modeling the 
opponent-process which is creative into a physical circuit duplicating 
these paradoxical analogies.  In so doing we have 
discovered some rather odd, unexpected behavior.  Consider the 13 or B 
problem, which is really based on our experiences of 13 or B.  A 
primitive African cattle herdsman might see the milk producing breast of 
the cows.   The point of view is that the observer collapses the meaning 
from stimuli and the stimuli do not cause the meaning.  Only an AGI that 
is self aware can do this.  The odd thing about the circuit is that you 
get different results when you measure either side of the opponent 
process circuit.  Also the measurement destroys the information.  If you 
don't measure it, it works. 
 
Why B over 13?  Memory or priming and habituation are keys for 
creative shifts in perception.  Goodness of fit of a stimulus after 
rotational and interference analysis results in a temporary conclusion.  
That conclusion can be habituated, which causes another probability to 
express itself.
 
Using your bicycle or bull pictures as an example

http://cn.cl2000.com/history/beida/ysts/image18/jpg/02.jpg
 
MINUS

http://www.chu.cam.ac.uk/images/departments/classics_bulls_head_rhyton.jpg
 
 
EQUALS
 
 
So you can see that there is a goodness of fit between the two stimuli.  
Also notice that new
knowledge is a product of allowing facts or identities to interact with 
each other.
 
My programming skills were inadequate compared to the ability of the human 
brain for creating the analysis, but you get the idea.
 
An average woman is a beautiful woman.  The average woman is the average 
of all women we have ever met.
An average theory of all the facts is a beautiful theory of the facts 
that we know.  Variability comes from the realization that
each person has a rich history of experiences.   Those stored average 
memories of identities are what we use to
judge our current experiences and occasionally jump out of our comfort zone.  
Once the jump has occurred there is no going back.
 
We have two brains (actually 4).  One brain sees tiny details and one 
brain sees the whole or big emotional
pictures.  When we combine that information we make a great leap 
forward.  A good AGI machine has to do the same.
 
Ron


Please do not include images in your posts.

The usual etiquette is to put them on a web server somewhere and give 
pointers in your message sent to the list.


Thank you



Richard Loosemore





Re: [agi] The Smushaby of Flatway.

2009-01-10 Thread Richard Loosemore

Mike Tintner wrote:

Richard,


You missed Mike Tintner's explanation . . . .


Mark,

 Right 

So you think maybe what we've got here is a radical influx of globally 
entangled free-association bosons?



Richard,

Q.E.D.  Well done.

Now tell me how you connected my ridiculous [or however else you might 
want to style it] argument with your argument re bosons - OTHER than 
by free association? What *prior* set of associations in your mind, or 
prior, preprogrammed set of rules, what logicomathematical thinking 
enabled you to form that connection?  (And it would be a good idea to 
apply it to your previous joke re Blue - because they must be *generally 
applicable* principles)


And what prior principles enabled you to spontaneously and creatively 
form the precise association of "radical influx of globally entangled 
free-association bosons" - to connect RADICAL INFLUX with GLOBALLY 
ENTANGLED ... and FREE ASSOCIATION and BOSONS.


You were being v. funny, right?  But humour is domain-switching (which 
you do multiple times above) and that's what you/AGI can't do or explain 
computationally.


***

Ironically, before I saw your post I had already written (and shelved) a 
P.S.  Here it is:


P.S. Note BTW - because I'm confident you're probably still thinking 
what's that weird nutter on about? what's this got to do with AGI? - 
the very best evidence for my claim. That claim is now that the brain is


* potentially infinitely domain-switching on both

a) a basic level,  and

b) a meta-level -

i.e. capable of forming endless new connections/associations on a higher 
level too and so, forming infinite new modes of reasoning, ( new *ways* 
of associating ideas as well as new association)


The very best evidence are *logic and mathematics themselves*. For logic 
and mathematics ceaselessly produce new branches of themselves. New 
logics. New numbers, New kinds of geometry. *New modes of reasoning.*


And an absolutely major problem for logic and mathematics (and current 
computation) is that they *cannot explain themselves* - cannot explain 
how these new modes of reasoning are generated.  There are no logical and 
mathematical or other formal ways of explaining these new branches.


Rational numbers cannot be used to deduce irrational numbers and thence 
imaginary numbers. Trigonometry cannot be used to deduce calculus. 
Euclidean geometry cannot be used to deduce Riemannian geometry, nor 
Riemannian geometry to deduce topology. And so on. Aristotelian logic 
cannot explain fuzzy logic, which cannot explain PLN.


Logicomathematical modes of reasoning are *not* generated 
logicomathematically, but creatively - as both Ben, I think, and 
certainly Franklin have acknowledged.


And clearly the brain is capable of forming infinitely new logics and 
mathematics - infinite new forms of reasoning -  by 
*non-logicomathematical*/*nonformal* means. By, I suggest,  free 
association among other means.




It's easy to make cheap, snide comments. But can either of you actually 
engage directly with the problem of domain-switching, and argue 
constructively about  particular creative problems and thinking - using 
actual evidence? I've seen literally no instances from either of you (or 
indeed, though this may at first seem surprising and may need a little 
explanation - anyone in the AI community).


Let's take an actual example of good creative thinking happening on the 
fly - and what I've called directed free association -


It's by one Richard Loosemore. You as well as others thought pretty 
creatively about the problem of the engram a while back. Here's the 
transcript of that thinking - as I said, good creative thinking, really 
trying to have new ideas (as opposed to just being snide here):


Now perhaps you can tell me what prior *logic* or programming produced 
the flow of your own ideas here? How do you get from one to the next?


Richard: Now you're just trying to make me think ;-). 1.

Okay, try this. 2.

[heck, you don't have to:  I am just playing with ideas here...]  3.

The methylation pattern has not necessarily been shown to *only* store
information in a distributed pattern of activation - the jury's out on
that one (correct me if I'm wrong). 4.5

Suppose that the methylation end caps are just being used as a way
station for some mechanism whose *real* goal is to make modifications to
 some patterns in the junk DNA.  6. So, here I am suggesting that the junk
DNA of any particular neuron is being used to code for large numbers of
episodic memories (one memory per DNA strand, say), with each neuron
being used as a redundant store of many episodes. 7.  The same episode is
stored in multiple neurons, but each copy is complete.  8. When we observe
changes in the methylation patterns, perhaps these are just part of the
transit mechanism, not the final destination for the pattern.  9. To put it
in the language that Greg Bear would use, the endcaps were just part of
the radio system. (http

Re: [agi] Identity abstraction

2009-01-10 Thread Richard Loosemore

Harry Chesley wrote:

Thanks for the more specific answer. It was the most illuminating of the
ones I've gotten. I realize that this isn't really the right list for
questions about human subjects experiments; just thought I'd give it a try.


In general no.

But that is my specialty.


Richard Loosemore





Richard Loosemore wrote:

Harry Chesley wrote:

On 1/9/2009 9:45 AM, Richard Loosemore wrote:

 There are certainly experiments that might address some of your
 concerns, but I am afraid you will have to acquire a general
 knowledge of what is known, first, to be able to make sense of what
 they might tell you.  There is nothing that can be plucked and
 delivered as a direct answer.

I was not asking for a complete answer. I was asking for experiments
that shed light on the area. I don't expect a mature answer, only
more food for thought. Your answer that there are such experiments,
but you're not going to tell me what they are is not a useful one.
Don't worry about whether I can digest the experimental context.
Maybe I know more than you assume I do.

What I am trying to say is that you will find answers that are
partially relevant to your question scattered across about a third of
the chapters of any comprehensive introduction to cognitive
psychology.  And then, at a deeper level, you will find something of
relevance in numerous more specialized documents.  But they are so
scattered that I could not possibly start to produce a comprehensive
list!

For example, the easiest things to mention are object perception
within a developmental psychology framework (see a dev psych textbook
for entire chapters on that);  the psycholgy of concepts will
involve numerous experiments that require judgements of whether
objects are same or different (but in each case the experiment will
not be focussed on answering the direct question you might be
asking);  the question of how concepts are represented sometimes
involves the dialectic between the prototype and exemplar camps
(see book by Smith and Medin), which partially touches on the
question;  there are discussions in the connectionist literature about
the problem of type-token discrimination (see Norman's chapter at the
end of the second PDP volume - McClelland and Rumelhart 1986/7);  then
there is the neuropsychology of naming (see books on psycholinguistics
like the one written by Trevor Harley for a comprehensive introduction
to that area);  there are also vast numbers of studies to do with
recognition of abstract concepts using neural nets (you could pick up
three or four papers that I wrote in the 1990s which center on the
problem of extracting the spelled form of words using phoneme clusters
if you look at the publications section of my website, susaro.com, but
there are thousands of others).

Then, you could also wait for my own textbook (in preparation) which
treats the formation of concepts and the mechanisms of abstraction
from the Molecular perspective.


These are just examples picked at random.  None of them answer your
question, they just give you pieces of the puzzle, for you to assemble
into a half-working answer after a couple of years of study ;-).


Anyone who knew the field would say, in response to your inquiry, "But
what exactly do you mean by the question?", and they would say
this because your question touches upon about six or seven major areas
of inquiry, in the most general possible terms.





Richard Loosemore
















Re: [agi] The Smushaby of Flatway.

2009-01-09 Thread Richard Loosemore



Ronald C. Blue wrote:

[snip] [snip] ... chaos stimulation because ... correlational
wavelet opponent processing machine ... globally entangled ...
Paul rf trap ... parallel modulating string pulses ... a relative
zero energy value or opponent process  ...   phase locked ...
parallel opponent process ... reciprocal Eigenfunction ...
opponent process ... summation interference ... gaussian
reference rf trap ... oscillon output picture ... locked into the
forth harmonic ... ... entangled with its Eigenfunction .. [snip]
 That is what entangled memory means.



Okay, I got that.

But how can it dequark the tachyon antimatter containment field?



Richard Loosemore







Mark Waser wrote:

But how can it dequark the tachyon antimatter containment field?


Richard,

You missed Mike Tintner's explanation . . . .

You're not thinking your argument through. Look carefully at my 
spontaneous COW - DOG - TAIL - CURRENT CRISIS - LOCAL VS GLOBAL 
THINKING - WHAT A NICE DAY - MUST GET ON- CANT SPEND MUCH MORE TIME

ON THIS etc. etc



It can do this partly because a) single ideas have multiple, often
massively multiple, idea/domain connections in the human brain, and
allow one to go off in any of multiple tangents/directions b)
humans have many things - and therefore multiple domains - on their
 mind at the same time concurrently - and can switch as above from
the immediate subject to some other pressing subject domain (e.g.
from economics/politics (local vs global) to the weather (what a
nice day).


So maybe it's worth taking 20 secs. of time - producing your own 
chain-of-free-association starting say with MAHONEY and going on

for another 10 or so items - and trying to figure out how



Mark,

 Right 

So you think maybe what we've got here is a radical influx of globally 
entangled free-association bosons?





Richard Loosemore





Re: [agi] Identity abstraction

2009-01-09 Thread Richard Loosemore

Harry Chesley wrote:
I'm trying to get an idea of how our minds handle the tension between 
identity and abstraction, and it occurs to me that there have probably 
been human subject experiments that would shed light on this. Does 
anyone know of any?


The basic issue: On the one hand, we identify two objects as being the 
same one (having the same identity), even when encountered at different 
times or from different perspectives. At least a part of how we do this 
is very likely a matter of noticing that the two objects have common 
features which are unlikely to occur together at random. On the other 
hand, over time we make abstractions of situations that we encounter 
repeatedly, most likely by removing details that are not in common 
between the instances. Yet it's these very details that let us derive 
identity.


So how do we remember abstractions that are dependent on identity? It 
seems that there must be experiments or evidence from brain-damaged 
individuals that would give clues.


Example: I may notice over time that whenever object A is smaller than 
object B and object B is smaller than object C, then object A is smaller 
than object C. Note that I have to give them names in order to even 
state the problem. Internally, we might do likewise and assign names, in 
which case there might be a part of the brain that performs the naming 
and could be damaged. Or we might go back to the original cases 
(case-based reasoning). Or we might store references to the original 
object instances from which we abstracted the general rule, which would 
provide unique identity. The later two may be distinguishable 
experimentally by choosing clever instances to abstract from.


Anyone know of any research that sheds light on this area?


It is impossible to answer your question the way it is posed, because it 
needs to become more specific before it can be answered, and on the way 
to becoming more specific, you will find yourself drawn into an enormous 
maze of theoretical assumptions and empirical data.


There are indeed parts of the brain that are involved in naming, but 
what we know could fill an entire book (or several) and it is organized 
according to our observations of what kinds of behaviors occur when some 
thing goes wrong, or when a particular experimental manipulation is 
performed.  Those behaviors do not, by themselves, answer your abstract 
questions about the underlying structures and mechanisms ... those 
structures and mechanisms are the subject of debate.


Essentially, you are asking for cognitive science to be more mature than 
it is at the moment.


There are certainly experiments that might address some of your 
concerns, but I am afraid you will have to acquire a general knowledge 
of what is known, first, to be able to make sense of what they might 
tell you.  There is nothing that can be plucked and delivered as a 
direct answer.




Richard Loosemore





Re: [agi] Identity abstraction

2009-01-09 Thread Richard Loosemore

Harry Chesley wrote:

On 1/9/2009 9:45 AM, Richard Loosemore wrote:

 There are certainly experiments that might address some of your
 concerns, but I am afraid you will have to acquire a general
 knowledge of what is known, first, to be able to make sense of what
 they might tell you.  There is nothing that can be plucked and
 delivered as a direct answer.


I was not asking for a complete answer. I was asking for experiments 
that shed light on the area. I don't expect a mature answer, only more 
food for thought. Your answer that there are such experiments, but 
you're not going to tell me what they are is not a useful one. Don't 
worry about whether I can digest the experimental context. Maybe I know 
more than you assume I do.


What I am trying to say is that you will find answers that are partially 
relevant to your question scattered across about a third of the chapters 
of any comprehensive introduction to cognitive psychology.  And then, at 
a deeper level, you will find something of relevance in numerous more 
specialized documents.  But they are so scattered that I could not 
possibly start to produce a comprehensive list!


For example, the easiest things to mention are object perception 
within a developmental psychology framework (see a dev psych textbook 
for entire chapters on that);  the psychology of concepts will involve 
numerous experiments that require judgements of whether objects are same 
or different (but in each case the experiment will not be focussed on 
answering the direct question you might be asking);  the question of how 
concepts are represented sometimes involves the dialectic between the 
prototype and exemplar camps (see book by Smith and Medin), which 
partially touches on the question;  there are discussions in the 
connectionist literature about the problem of type-token discrimination 
(see Norman's chapter at the end of the second PDP volume - McClelland 
and Rumelhart 1986/7);  then there is the neuropsychology of naming (see 
books on psycholinguistics like the one written by Trevor Harley for a 
comprehensive introduction to that area);  there are also vast numbers 
of studies to do with recognition of abstract concepts using neural nets 
(you could pick up three or four papers that I wrote in the 1990s which 
center on the problem of extracting the spelled form of words using 
phoneme clusters if you look at the publications section of my website, 
susaro.com, but there are thousands of others).


Then, you could also wait for my own textbook (in preparation) which 
treats the formation of concepts and the mechanisms of abstraction from 
the Molecular perspective.



These are just examples picked at random.  None of them answer your 
question, they just give you pieces of the puzzle, for you to assemble 
into a half-working answer after a couple of years of study ;-).



Anyone who knew the field would say, in response to your inquiry, "But 
what exactly do you mean by the question?", and they would say this 
because your question touches upon about six or seven major areas of 
inquiry, in the most general possible terms.






Richard Loosemore






Re: [agi] The Smushaby of Flatway.

2009-01-08 Thread Richard Loosemore

Ronald C. Blue wrote:

[snip]
[snip] ... chaos stimulation because ... correlational wavelet opponent 

 processing machine ... globally entangled ... Paul rf trap ... parallel
 modulating string pulses ... a relative zero energy value or

opponent process  ...   phase locked ... parallel opponent process
... reciprocal Eigenfunction ...  opponent process ... 
summation interference ... gaussian reference rf trap ...

 oscillon output picture ... locked into the forth harmonic ...
 ... entangled with its Eigenfunction ..
 
[snip]
 
That is what entangled memory means.



Okay, I got that.

But how can it dequark the tachyon antimatter containment field?



Richard Loosemore





Re: [agi] [Science Daily] Our Unconscious Brain Makes The Best Decisions Possible

2009-01-01 Thread Richard Loosemore

Jim Bromer wrote:

On Mon, Dec 29, 2008 at 4:02 PM, Richard Loosemore r...@lightlink.com wrote:

 My friend Mike Oaksford in the UK has written several
papers giving a higher level cognitive theory that says that people are, in
fact, doing something like Bayesian estimation when they make judgments.  In
fact, people are very good at being Bayesians, contra the loud protests of
the I Am A Bayesian Rationalist crowd, who think they were the first to do
it.
Richard Loosemore


That sounds like an easy hypothesis to test.  Except for a problem.
Previous learning would be relevant to the solving of the problems and
would produce results that could not be totally accounted for.
Complexity, in the complicated sense of the term, is relevant to this
problem, both in the complexity of how previous learning might
influence decision making and the possible (likely) complexity of the
process of judgment itself.

If extensive tests showed that people overwhelmingly made judgments
that were Bayesianesque then this conjecture would be important.  The
problem is that, since the numerous possible influences of previous
learning have to be ruled out, I would suspect that any test for
Bayesian-like reasoning would have to be kept so simple that it would
not add anything new to our knowledge.


Uh... you have to actually read the research to know how they came to 
these conclusions.


Take it from me, they are a mite ahead of you on this one :-).



Richard Loosemore







Re: [agi] [Science Daily] Our Unconscious Brain Makes The Best Decisions Possible

2008-12-29 Thread Richard Loosemore

Lukasz Stafiniak wrote:

http://www.sciencedaily.com/releases/2008/12/081224215542.htm

Nothing surprising ;-)


Nothing surprising?!!

8-) Don't say that too loudly, Yudkowsky might hear you. :-)

The article is a bit naughty when it says, of Tversky and Kahneman, 
that "...this has become conventional wisdom among cognition 
researchers."  Actually, the original facts were interpreted in a 
variety of ways, some of which strongly disagreed with T & K's original 
interpretation, just like this one you reference above.  The only thing 
that is conventional wisdom is that the topic exists, and is the subject 
of dispute.


And, as many people know, I made the mistake of challenging Yudkowsky on 
precisely this subject back in 2006, when he wrote an essay strongly 
advocating T & K's original interpretation.  Yudkowsky went completely 
berserk, accused me of being an idiot, having no brain, not reading any 
of the literature, never answering questions, and generally being 
something unspeakably worse than a slime-oozing crank.  He literally 
wrote an essay denouncing me as equivalent to a flat-earth believing 
crackpot.


When I suggested that someone go check some of his ravings with an 
outside authority, he banned me from his discussion list.


Ah, such are the joys of speaking truth to power(ful idiots).

;-)

As far as this research goes, it sits somewhere down at the lower end of 
the available theories.  My friend Mike Oaksford in the UK has written 
several papers giving a higher level cognitive theory that says that 
people are, in fact, doing something like Bayesian estimation when they 
make judgments.  In fact, people are very good at being Bayesians, 
contra the loud protests of the I Am A Bayesian Rationalist crowd, who 
think they were the first to do it.
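
(For anyone who wants the flavor of what "doing something like Bayesian 
estimation" means, here is the textbook update rule on a single 
hypothesis, in Python; the numbers are arbitrary and have nothing to do 
with Oaksford's experiments.)

# Minimal Bayes-rule update:  P(H | E) = P(E | H) * P(H) / P(E).
# The numbers are arbitrary illustration, not data from any experiment.
prior_h = 0.30                   # prior belief in hypothesis H
like_e_given_h = 0.80            # P(evidence | H true)
like_e_given_not_h = 0.20        # P(evidence | H false)

p_e = like_e_given_h * prior_h + like_e_given_not_h * (1 - prior_h)
posterior_h = like_e_given_h * prior_h / p_e

print(f"P(E)     = {p_e:.3f}")          # 0.380
print(f"P(H | E) = {posterior_h:.3f}")  # 0.632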






Richard Loosemore





Re: [agi] Alternative Circuitry

2008-12-28 Thread Richard Loosemore

John G. Rose wrote:

Reading this -

http://www.nytimes.com/2008/12/23/health/23blin.html?ref=science

 

makes me wonder what other circuitry we have that's discouraged from 
being accepted.


This blindsight news is not really news.  It has been known for decades 
that there are two separate visual pathways in the brain, which seem to 
process "what" information and "vision for action" information.


So this recent hubbub is just a new, more dramatic demonstration of 
something that has been known about for a long time.


This is my take on what is going on here:

The interesting fact is that the vision for action pathway can operate 
without conscious awareness.  It is an autopilot.  What this seems to 
imply is that at some early point in evolution there was only that 
pathway, and there was no general ability to think about higher level 
aspects of the world.


Then the higher cognitive mechanisms developed, while the older system 
remained in place.  The higher cognitive mechanisms grew their own 
system for analyzing visual input (the 'what' pathway), but it turned 
out that the brain could still use the older pathway in parallel with 
the new, so it was left in place.


I am going to add this as a prediction derived from the model of 
consciousness in my AGI-09 paper:  the prediction is that when we 
uncover the exact implementation details of the analysis mechanism 
that I discussed in the paper, we will find that the AM is entirely 
within the higher cognitive system, and that the vision-for-action 
pathway just happens to be beyond the scope of what the AM can access. 
It is because it is outside that scope that no consciousness is 
associated with what that pathway does.


(Unfortunately, of course, this prediction cannot be fully tested until 
we can pin down the exact details of how the analysis mechanism gets 
implemented in the brain.  The same is true of the other predictions).






Richard Loosemore




Re: [agi] Universal intelligence test benchmark

2008-12-26 Thread Richard Loosemore

Philip Hunt wrote:

2008/12/26 Matt Mahoney matmaho...@yahoo.com:

I have updated my universal intelligence test with benchmarks on about 100 
compression programs.


Humans aren't particularly good at compressing data. Does this mean
humans aren't intelligent, or is it a poor definition of intelligence?


Although my goal was to sample a Solomonoff distribution to measure universal
intelligence (as defined by Hutter and Legg),


If I define intelligence as the ability to catch mice, does that mean
my cat is more intelligent than most humans?

More to the point, I don't understand the point of defining
intelligence this way. Care to enlighten me?



This may or may not help, but in the past I have pursued exactly these 
questions, only to get such confusing, evasive and circular answers, all 
of which amounted to nothing meaningful, that eventually I (like many 
others) have just had to give up and not engage any more.


So, the real answers to your questions are that no, compression is an 
extremely poor definition of intelligence; and yes, defining 
intelligence to be something completely arbitrary (like the ability to 
catch mice) is what Hutter and Legg's analyses are all about.
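
(For concreteness, a benchmark of this kind scores a program by how 
small it can make a test corpus.  Here is a minimal sketch using 
Python's built-in zlib on a made-up corpus -- not Matt's actual test 
data -- just to show what is being measured:)

import zlib

# Score a compressor the way compression-based intelligence tests do:
# smaller compressed output = better "prediction" of the data.
corpus = (b"the cat sat on the mat. " * 200) + b"an unexpected ending."

for level in (1, 6, 9):
    compressed = zlib.compress(corpus, level)
    ratio = len(compressed) / len(corpus)
    print(f"zlib level {level}: {len(corpus)} -> {len(compressed)} bytes "
          f"(ratio {ratio:.3f})")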


Searching for previous posts of mine which mention Hutter, Legg or AIXI 
will probably turn up a number of lengthy discussions in which I took a 
good deal of trouble to debunk this stuff.


Feel free, of course, to make your own attempt to extract some sense 
from it all, and by all means let me know if you eventually come to a 
different conclusion.





Richard Loosemore





Re: [agi] Introducing Steve's Theory of Everything in cognition.

2008-12-25 Thread Richard Loosemore

Steve Richfield wrote:

Richard,

On 12/25/08, *Richard Loosemore* r...@lightlink.com 
mailto:r...@lightlink.com wrote:


Steve Richfield wrote:

Ben, et al,
 After ~5 months of delay for theoretical work, here are the
basic ideas as to how really fast and efficient automatic
learning could be made almost trivial. I decided NOT to post the
paper (yet), but rather, to just discuss the some of the
underlying ideas in AGI-friendly terms.
 Suppose for a moment that a NN or AGI program (they can be
easily mapped from one form to the other


... this is not obvious, to say the least.  Mapping involves many
compromises that change the functioning of each type ...

 
There are doubtless exceptions to my broad statement, but generally, 
neuron functionality is WIDE open to be pretty much ANYTHING you choose, 
including that of an AGI engine's functionality on its equations.
 
In the reverse, any NN could be expressed in a shorthand form that 
contains structure, synapse functions, etc., and an AGI engine could be 
built/modified to function according to that shorthand.
 
In short, mapping between NN and AGI forms presumes flexibility in the 
functionality of the target form. Where that flexibility is NOT present, 
e.g. because of orthogonal structure, etc., then you must ask whether 
something is being gained or lost by the difference. Clearly, any 
transition that involves a loss should be carefully examined to see if 
the entire effort is headed in the wrong direction, which I think was 
your original point here.



There is a problem here.

When someone says X and Y can easily be mapped from one form to the 
other there is an implication that they are NOT suggesting that we go 
right down to the basic constituents of both X and Y in order to effect 
the mapping.


Thus:  "Chalk and Cheese can easily be mapped from one to the other" is 
trivially true if we are prepared to go down to the common 
denominator of electrons, protons and neutrons.  But if we stay at a 
sensible level then, no, these do not map onto one another.


Similarly, if you claim that NN and regular AGI map onto one another, I 
assume that you are saying something more substantial than that these 
two can both be broken down into their primitive computational parts, 
and that when this is done they seem equivalent.


NN and regular AGI, the way they are understood by people who 
understand them, have very different styles of constructing intelligent 
systems.  Sure, you can code both in C, or Lisp, or Cobol, but that is 
to trash the real meaning of "are easily mapped onto one another".





), instead of operating on objects (in an

object-oriented sense)


Neither NN nor AGI has any intrinsic relationship to OO.

 
Clearly I need a better term here. Both NNs and AGIs tend to have 
neurons or equations that reflect the presence (or absence) of various 
objects, conditions, actions, etc. My fundamental assertion is that if 
you differentiate the inputs so that everything in the entire network 
reflects dp/dt instead of straight probabilities, then the network works 
identically, but learning is GREATLY simplified.


Seems like a simple misunderstanding:  you were not aware that object 
oriented does not mean the same as saying that there are fundamental 
atomic constituents of a representation.





 


, instead, operates on the rate-of-changes in the

probabilities of objects, or dp/dt. Presuming sufficient
bandwidth to generally avoid superstitious coincidences, fast
unsupervised learning then becomes completely trivial, as like
objects cause simultaneous like-patterned changes in the inputs
WITHOUT the overlapping effects of the many other objects
typically present in the input (with numerous minor exceptions).


You have already presumed that something supplies the system with
objects that are meaningful.  Even before your first mention of
dp/dt, there has to be a mechanism that is so good that it never
invents objects such as:

Object A:  A person who once watched all of Tuesday Weld's movies in
the space of one week or

Object B:  Something that is a combination of Julius Caesar's pinky
toe and a sour grape that Brutus just spat out or

Object C:  All of the molecules involved in a swimming gala that
happen to be 17.36 meters from the last drop of water that splashed
from the pool.

You have supplied no mechanism that is able to do that, but that
mechanism is 90% of the trouble, if learning is what you are about.

 
With prior unsupervised learning you are 100% correct. However none of 
the examples you gave involved temporal simultaneity. I will discuss B 
above because it is close enough to be interesting.
 
If indeed someone just began to notice something interesting about 
Caesar's pinkie toe *_as_* they just began to notice the taste

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-25 Thread Richard Loosemore

Ed Porter wrote:
Why is it that people who repeatedly and insultingly say other people’s 
work or ideas are total nonsense -- without any reasonable 
justification -- are still allowed to participate in the discussion on 
the AGI list?


Because they know what they are talking about.

And because they got that way by having a low tolerance for fools, 
nonsense and people who can't tell the difference between the critique 
of an idea and a personal insult.


;-)




Richard Loosemore





Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-24 Thread Richard Loosemore



Why is it that people who repeatedly resort to personal abuse like this 
are still allowed to participate in the discussion on the AGI list?





Richard Loosemore





Ed Porter wrote:

Richard,

 

You originally totally trashed Tononi's paper, including its central 
core, by saying:


 


It is, for want of a better word, nonsense.  And since people take me to

task for being so dismissive, let me add that it is the central thesis

of the paper that is nonsense:  if you ask yourself very carefully

what it is he is claiming, you can easily come up with counterexamples

that make a mockery of his conclusion.

 


When asked to support your statement that

 

you can easily come up with counterexamples that make a mockery of his 
conclusion 


 

you refused.  You did so by grossly mis-describing Tononi’s paper (for 
example it does not include “pages of …math”, of any sort, and 
particularly not “pages of irrelevant math”) and implying its 
mis-described faults so offended your delicate sense of AGI propriety 
that re-reading it enough to find support for your extremely critical 
(and perhaps totally unfair) condemnation would be either too much work 
or too emotionally painful.


 

You said the counterexamples to the core of this paper were easy to come 
up with, but you can’t seem to come up with any.


 


Such stunts have the appearance of being those of a pompous windbag.

 


Ed Porter

 

P.S. Your postscript is not sufficiently clear to provide much support 
for your position.


P.P.S. You below  

 

 


-Original Message-
From: Richard Loosemore [mailto:r...@lightlink.com]
Sent: Tuesday, December 23, 2008 9:53 AM
To: agi@v2.listbox.com
Subject: Re: [agi] SyNAPSE might not be a joke  was  Building a 
machine that can learn from experience


 


Ed Porter wrote:


 Richard,







 Please describe some of the counterexamples, that you can easily come up



 with, that make a mockery of Tononi's conclusion.







 Ed Porter


 


Alas, I will have to disappoint.  I put a lot of effort into

understanding his paper first time around, but the sheer agony of

reading (/listening to) his confused, shambling train of thought, the

non-sequiteurs, and the pages of irrelevant math  that I do not need

to experience a second time.  All of my original effort only resulted in

the discovery that I had wasted my time, so I have no interest in

wasting more of my time.

 


With other papers that contain more coherent substance, but perhaps what

looks like an error, I would make the effort.  But not this one.

 


It will have to be left as an exercise for the reader, I'm afraid.

 

 

 


Richard Loosemore

 

 


P.S.   A hint.  All I remember was that he started talking about

multiple regions (columns?) of the brain exchanging information with one

another in a particular way, and then he asserted a conclusion which, on

quick reflection, I knew would not be true of a system resembling the

distributed one that I described in my consciousness paper (the

molecular model).  Knowing that his conclusion was flat-out untrue for

that one case, and for a whole class of similar systems, his argument

was toast.

 

 

 

 

 

 

 

 

 


 -Original Message-



 From: Richard Loosemore [mailto:r...@lightlink.com]



 Sent: Monday, December 22, 2008 8:54 AM



 To: agi@v2.listbox.com



 Subject: Re: [agi] SyNAPSE might not be a joke  was  Building a



 machine that can learn from experience







 Ed Porter wrote:



 I don't think this AGI list should be so quick to dismiss a $4.9 million



 dollar grant to create an AGI.  It will not necessarily be vaporware.



 I think we should view it as a good sign.






 







 Even if it is for a project that runs the risk, like many DARPA projects



 (like most scientific funding in general) of not necessarily placing its



 money where it might do the most good --- it is likely to at least



 produce some interesting results --- and it just might make some very



 important advances in our field.






 







 The article from http://www.physorg.com/news148754667.html said:






 







 .a $4.9 million grant.for the first phase of DARPA's Systems of



 Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) project.






 







 Tononi and scientists from Columbia University and IBM will work on the



 software for the thinking computer, while nanotechnology and



 supercomputing experts from Cornell, Stanford and the University of



 California-Merced will create the hardware. Dharmendra Modha of IBM is



 the principal investigator.






 







 The idea is to create a computer capable of sorting through multiple


 streams of changing data, to look for patterns and make logical 

decisions.





 







 There's another requirement: The finished cognitive computer should be



as small as the brain of a small mammal and use as little power as a



 100-watt light bulb. It's a major

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-23 Thread Richard Loosemore

Ed Porter wrote:
Richard, 


Please describe some of the counterexamples, that you can easily come up
with, that make a mockery of Tononi's conclusion.

Ed Porter


Alas, I will have to disappoint.  I put a lot of effort into 
understanding his paper the first time around, but the sheer agony of 
reading (/listening to) his confused, shambling train of thought, the 
non sequiturs, and the pages of irrelevant math is not something I need 
to experience a second time.  All of my original effort only resulted in 
the discovery that I had wasted my time, so I have no interest in 
wasting more of my time.


With other papers that contain more coherent substance, but perhaps what 
looks like an error, I would make the effort.  But not this one.


It will have to be left as an exercise for the reader, I'm afraid.



Richard Loosemore


P.S.   A hint.  All I remember was that he started talking about 
multiple regions (columns?) of the brain exchanging information with one 
another in a particular way, and then he asserted a conclusion which, on 
quick reflection, I knew would not be true of a system resembling the 
distributed one that I described in my consciousness paper (the 
molecular model).  Knowing that his conclusion was flat-out untrue for 
that one case, and for a whole class of similar systems, his argument 
was toast.











-Original Message-
From: Richard Loosemore [mailto:r...@lightlink.com] 
Sent: Monday, December 22, 2008 8:54 AM

To: agi@v2.listbox.com
Subject: Re: [agi] SyNAPSE might not be a joke  was  Building a
machine that can learn from experience

Ed Porter wrote:
I don't think this AGI list should be so quick to dismiss a $4.9 million 
dollar grant to create an AGI.  It will not necessarily be vaporware. 
I think we should view it as a good sign.


 

Even if it is for a project that runs the risk, like many DARPA projects 
(like most scientific funding in general) of not necessarily placing its 
money where it might do the most good --- it is likely to at least 
produce some interesting results --- and it just might make some very 
important advances in our field.


 


The article from http://www.physorg.com/news148754667.html said:

 

...a $4.9 million grant...for the first phase of DARPA's Systems of 
Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) project.


 

Tononi and scientists from Columbia University and IBM will work on the 
software for the thinking computer, while nanotechnology and 
supercomputing experts from Cornell, Stanford and the University of 
California-Merced will create the hardware. Dharmendra Modha of IBM is 
the principal investigator.


 

The idea is to create a computer capable of sorting through multiple 
streams of changing data, to look for patterns and make logical decisions.


 

There's another requirement: The finished cognitive computer should be 
as small as the brain of a small mammal and use as little power as a 
100-watt light bulb. It's a major challenge. But it's what our brains do 
every day.


 

I have just spent several hours reading a Tononi paper, An information 
integration theory of consciousness and skimmed several parts of his 
book A Universe of Consciousness he wrote with Edelman, whom Ben has 
referred to often in his writings.  (I have attached my mark up of the 
article, which if you read just the yellow highlighted text, or (for 
more detail) the red, you can get a quick understanding of.  You can 
also view it in MSWord outline mode if you like.)


 

This paper largely agrees with my notion, stated multiple times on this 
list, that consciousness is an incredibly complex computation that 
interacts with itself in a very rich manner that makes it aware of itself.


For the record, this looks like the paper that I listened to Tononi talk 
about a couple of years ago -- the one I mentioned in my last message.


It is, for want of a better word, nonsense.  And since people take me to 
task for being so dismissive, let me add that it is the central thesis 
of the paper that is nonsense:  if you ask yourself very carefully 
what it is he is claiming, you can easily come up with counterexamples 
that make a mockery of his conclusion.




Richard Loosemore


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription:
https://www.listbox.com/member/?;
Powered by Listbox: http://www.listbox.com



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: https://www.listbox.com/member/?;
Powered by Listbox: http://www.listbox.com






---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-23 Thread Richard Loosemore

Ben Goertzel wrote:


Richard,

I'm curious what you think of William Calvin's neuroscience hypotheses 
as presented in e.g. The Cerebral Code


That book is a bit out of date now, but still, he took complexity and 
nonlinear dynamics quite seriously, so it seems to me there may be some 
resonance between his ideas and your own


I find his speculative ideas more agreeable than Tononi's, myself...

thx
ben g


Yes, I did read his book (or part of it) back in 98/99, but ...

From what I remember, I found resonance, as you say, but he is one of 
those people who is struggling to find a way to turn an intuition into 
something concrete.  It is just that he wrote a book about it before he 
got to Concrete Operations.


It would be interesting to take a look at it again, 10 years later, and 
see whether my opinion has changed.


To put this in context, I felt like I was looking at a copy of myself 
back in 1982, when I struggled to write down my intuitions as a 
physicist coming to terms with psychology for the first time.  I am now 
acutely embarrassed by the naivete of that first attempt, but in spite 
of the embarrassment I know that I have since turned those intuitions 
into something meaningful, and I know that in spite of my original 
hubris, I was on a path to something that actually did make sense.  To 
cognitive scientists at the time it looked awful, unmotivated and 
disconnected from reality (by itself, it was!), but in the end it was 
not trash because it had real substance buried inside it.


With people like Calvin (and others) I see writings that look somewhat 
speculative and ungrounded, just like my early attempts, so I am torn 
between a desire to be lenient (because I was like that once) and a 
feeling that they really need to be aware that their thoughts are still 
ungelled.


Anyhow, that's my quick thoughts on him.  I'll see if I can dig out his 
book at some point.





Richard Loosemore










On Tue, Dec 23, 2008 at 9:53 AM, Richard Loosemore r...@lightlink.com 
mailto:r...@lightlink.com wrote:


Ed Porter wrote:

Richard,
Please describe some of the counterexamples, that you can easily
come up
with, that make a mockery of Tononi's conclusion.

Ed Porter


Alas, I will have to disappoint.  I put a lot of effort into
understanding his paper the first time around, but the sheer agony of
reading (/listening to) his confused, shambling train of thought,
the non sequiturs, and the pages of irrelevant math is not something
I need to experience a second time.  All of my original effort
only resulted in the discovery that I had wasted my time, so I have
no interest in wasting more of my time.

With other papers that contain more coherent substance, but perhaps
what looks like an error, I would make the effort.  But not this one.

It will have to be left as an exercise for the reader, I'm afraid.



Richard Loosemore


P.S.   A hint.  All I remember was that he started talking about
multiple regions (columns?) of the brain exchanging information with
one another in a particular way, and then he asserted a conclusion
which, on quick reflection, I knew would not be true of a system
resembling the distributed one that I described in my consciousness
paper (the molecular model).  Knowing that his conclusion was
flat-out untrue for that one case, and for a whole class of similar
systems, his argument was toast.









-Original Message-
From: Richard Loosemore [mailto:r...@lightlink.com
mailto:r...@lightlink.com] Sent: Monday, December 22, 2008 8:54 AM
To: agi@v2.listbox.com mailto:agi@v2.listbox.com
Subject: Re: [agi] SyNAPSE might not be a joke  was 
Building a
machine that can learn from experience

Ed Porter wrote:

I don't think this AGI list should be so quick to dismiss a
$4.9 million dollar grant to create an AGI.  It will not
necessarily be vaporware. I think we should view it as a
good sign.

 
Even if it is for a project that runs the risk, like many

DARPA projects (like most scientific funding in general) of
not necessarily placing its money where it might do the most
good --- it is likely to at least produce some interesting
results --- and it just might make some very important
advances in our field.

 
The article from http://www.physorg.com/news148754667.html said:


 
...a $4.9 million grant...for the first phase of DARPA's

Systems of Neuromorphic Adaptive Plastic Scalable
Electronics (SyNAPSE) project.

 
Tononi and scientists from Columbia University and IBM will

work on the software for the thinking computer, while

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-22 Thread Richard Loosemore

Ben Goertzel wrote:


I know Dharmendra Modha a bit, and I've corresponded with Eugene 
Izhikevich who is Edelman's collaborator on large-scale brain 
simulations.  I've read Tononi's stuff too.  I think these are all smart 
people with deep understandings, and all in all this will be research 
money well spent.


However, there is no design for a thinking machine here.  There is 
cool work on computer simulations of small portions of the brain.


I find nothing to disrespect in the scientific work involved in this 
DARPA project.  It may not be the absolute most valuable research path, 
but it's a good one. 

However, IMO the rhetoric associating it with thinking machine 
building is premature and borderline dishonest.  It's marketing 
rhetoric.


I agree with this last paragraph wholeheartedly:  this is exactly what I 
meant when I said Neuroscience vaporware.


I also know Tononi's work, because I listened to him give a talk about 
consciousness once.  It was *computationally* incoherent.




Richard Loosemore



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=123753653-47f84b
Powered by Listbox: http://www.listbox.com


Re: [agi] Relevance of SE in AGI

2008-12-22 Thread Richard Loosemore

Valentina Poletti wrote:
I have a question for you AGIers.. from your experience as well as from 
your background, how relevant do you think software engineering is in 
developing AI software and, in particular AGI software? Just wondering.. 
does software verification as well as correctness proving serve any use 
in this field? Or is this something used just for Nasa and critical 
applications?


1) Software engineering (if we take that to mean the conventional 
repertoire of techniques taught as SE) is relevant to any project that 
gets up above a certain size, but it is less important when the project 
is much smaller, serves a more exploratory function, or where the design 
is constantly changing.  To this extent I agree with Pei's comments.


2) If you are looking beyond the idea of simply grabbing some SE 
techniques off the shelf, and are instead asking if SE can have an 
impact on AGI, then the answer is a dramatic Yes!.  Why?  Because 
tools determine the way that we *can* think about things.  Tools shape 
our thoughts.  They can sometimes enable us to think in new ways that 
were simply not possible before the tools were invented.


I decided a long time ago that if cognitive scientists had easy-to-use 
tools that enabled them to construct realistic components of 
thinking systems, their entire style of explanation would be 
revolutionized.  Right now, cog sci people cannot afford the time to be 
both cog sci experts *and* sophisticated software developers, so they 
have to make do with programming that is, by and large, trivially 
simple.  This determines the kinds of models and explanations they can 
come up with.  (Ditto in spades for the neuroscientists, by the way).


So, the more global answer to your question is that nothing could be 
more important for AGI than software engineering.


The problem is, that the kind of software engineering we are talking 
about is not a matter of grabbing SE components off the shelf, but 
asking what the needs of cognitive scientists and AGIers might be, and 
then inventing new techniques and tools that will give these people the 
ability to think about intelligent systems in new ways.


That is why I am working on Safaire.





Richard Loosemore


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=123753653-47f84b
Powered by Listbox: http://www.listbox.com


Re: [agi] Relevance of SE in AGI

2008-12-22 Thread Richard Loosemore

Ben Goertzel wrote:


Well, we have attempted to use sound software engineering principles to 
architect the OpenCog framework, with a view toward making it usable for 
prototyping speculative AI ideas and ultimately building scalable, 
robust, mature AGI systems as well


But, we are fairly confident of our overall architecture with this 
system because there have been a number of predecessor systems based on 
similar principles, which we implemented and learned a lot from ...


If one has a new AGI idea and wants to start experimenting with it, SE 
is basically a secondary matter ... the point is to explore the 
algorithms and ideas by whatever means is less time-wasting and 
frustrating...


OTOH, if one has an AGI idea that's already been fleshed out a fair bit 
and one is ready to try to use it as the basis for a scalable, 
extensible system, SE is more worth paying attention to...


Premature attention to engineering when one should be focusing on 
science is a risk, but so is ignoring engineering when one wants to 
build a scalable, extensible system...


I think you missed my point, but no matter.

My point was that premature attention to engineering is absolutely 
vital in a field such as the cognitive science approach to AGI. 
Cognitive scientists simply do not have the time to be experts in 
cognitive science, AND software engineers at the same time.  For that 
reason, their models, and the way they think about theoretical models, 
are severely constrained by their weak ability to build software systems.


In this case, the science is being crippled by the lack of tools, so 
there is no such thing as premature attention to engineering.




Richard Loosemore









ben g

On Mon, Dec 22, 2008 at 9:03 AM, Richard Loosemore r...@lightlink.com 
mailto:r...@lightlink.com wrote:


Valentina Poletti wrote:

I have a question for you AGIers.. from your experience as well
as from your background, how relevant do you think software
engineering is in developing AI software and, in particular AGI
software? Just wondering.. does software verification as well as
correctness proving serve any use in this field? Or is this
something used just for Nasa and critical applications?


1) Software engineering (if we take that to mean the conventional
repertoire of techniques taught as SE) is relevant to any project
that gets up above a certain size, but it is less important when the
project is much smaller, serves a more exploratory function, or
where the design is constantly changing.  To this extent I agree
with Pei's comments.

2) If you are looking beyond the idea of simply grabbing some SE
techniques off the shelf, and are instead asking if SE can have an
impact on AGI, then the answer is a dramatic Yes!.  Why?  Because
tools determine the way that we *can* think about things.  Tools
shape our thoughts.  They can sometimes enable us to think in new
ways that were simply not possible before the tools were invented.

I decided a long time ago that if cognitive scientists had
easy-to-use tools that enabled them to construct realistic
components of thinking systems, their entire style of explanation
would be revolutionized.  Right now, cog sci people cannot afford
the time to be both cog sci experts *and* sophisticated software
developers, so they have to make do with programming that is, by and
large, trivially simple.  This determines the kinds of models and
explanations they can come up with.  (Ditto in spades for the
neuroscientists, by the way).

So, the more global answer to your question is that nothing could be
more important for AGI than software engineering.

The problem is, that the kind of software engineering we are talking
about is not a matter of grabbing SE components off the shelf, but
asking what the needs of cognitive scientists and AGIers might be,
and then inventing new techniques and tools that will give these
people the ability to think about intelligent systems in new ways.

That is why I am working on Safaire.





Richard Loosemore



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: https://www.listbox.com/member/?;
https://www.listbox.com/member/?;
Powered by Listbox: http://www.listbox.com




--
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org mailto:b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx


agi | Archives | Modify Your Subscription

Robots with a sense of touch [WAS Re: [agi] AGI Preschool....]

2008-12-20 Thread Richard Loosemore

Philip Hunt wrote:

2008/12/20 Ben Goertzel b...@goertzel.org:

Well, there is massively more $$ going into robotics dev than into AGI dev,
and no one seems remotely near to solving the hard problems

Which is not to say it's a bad area of research, just that it's a whole
other huge confusing RD can of worms

So I still say, the choices are

-- virtual embodiment, as I advocate

-- delay working on AGI for a decade or so, and work on robotics now instead
(where by robotics I include software work on low-level sensing and actuator
control)

Either choice makes sense but I prefer the former as I think it can get us
to the end goal faster.


That makes sense


But, with actuation, I'm not so sure.  The almost total absence of touch and
kinesthetics in current robots is a huge impediment, and puts them at a huge
disadvantage relative to humans.


Good point.

I wonder how easy it would be to provide a robot with a sensor that
gives a sense of touch?  Maybe something the thickness of a sheet of
paper, with horizontal and vertical wires criss-crossing it (the wires
not electrically connected), would work, if there were a difference in
capacitance when the wires were further apart or closer together.
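
For concreteness, here is a minimal, hypothetical sketch of how such a
criss-crossed grid might be scanned in software.  The read_capacitance()
function is an invented stand-in for whatever ADC or driver the real
hardware would expose; nothing here refers to an actual sensor API.

import random  # only used to fake readings in this illustrative stub

ROWS, COLS = 16, 16          # number of horizontal / vertical wires

def read_capacitance(row, col):
    # Hypothetical measurement of the capacitance at one wire crossing.
    return 100.0 + random.gauss(0.0, 0.5)   # fake baseline plus noise

def calibrate(samples=10):
    # Average a few idle scans to get a per-crossing baseline.
    base = [[0.0] * COLS for _ in range(ROWS)]
    for _ in range(samples):
        for r in range(ROWS):
            for c in range(COLS):
                base[r][c] += read_capacitance(r, c) / samples
    return base

def scan_touch_map(baseline, threshold=2.0):
    # Flag crossings whose capacitance has shifted, i.e. places where
    # pressure has pushed the wire layers closer together.
    return [[abs(read_capacitance(r, c) - baseline[r][c]) > threshold
             for c in range(COLS)]
            for r in range(ROWS)]

baseline = calibrate()
touches = scan_touch_map(baseline)
print(sum(cell for row in touches for cell in row), "crossings pressed")
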



How about:

http://www.geekologie.com/2006/06/nanoparticles_give_robots_prec.php

or

http://www.eetimes.com/news/latest/showArticle.jhtml?articleID=163701010




Richard Loosemore


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=123753653-47f84b
Powered by Listbox: http://www.listbox.com


Cross-Cultural Discussion using English [WAS Re: [agi] Creativity ...]

2008-12-19 Thread Richard Loosemore

Ben Goertzel wrote:


yeah ... that's not a matter of the English language but rather a matter 
of the American Way ;-p


Through working with many non-Americans I have noted that what Americans 
often intend as a playful obnoxiousness is interpreted by 
non-Americans more seriously...


Except that, in fact, Mike is not American but British.

As a result of long experience talking to Americans, I have discovered 
that what British people intend as routine discussion, Americans 
interpret as serious, intentional obnoxiousness.  And then, what 
Americans (as you say) intend as playful obnoxiousness, non-Americans 
interpret more seriously.




Richard Loosemore







I think we had some mutual colleagues in the past who favored such a 
style of discourse ;-)


ben

On Fri, Dec 19, 2008 at 1:49 PM, Pei Wang mail.peiw...@gmail.com 
mailto:mail.peiw...@gmail.com wrote:


On Fri, Dec 19, 2008 at 1:40 PM, Ben Goertzel b...@goertzel.org
mailto:b...@goertzel.org wrote:
 
  IMHO, Mike Tintner is not often rude, and is not exactly a
troll because I
  feel he is genuinely trying to understand the deeper issues
related to AGI,
  rather than mainly trying to stir up trouble or cause irritation

Well, I guess my English is not good enough to tell the subtle
difference in tones, but his comments often sound that You AGIers are
so obviously wrong that I don't even bother to understand what you are
saying ... Now let me tell you 

I don't enjoy this tone.

Pei





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=123753653-47f84b
Powered by Listbox: http://www.listbox.com


Re: Cross-Cultural Discussion using English [WAS Re: [agi] Creativity ...]

2008-12-19 Thread Richard Loosemore

Pei Wang wrote:

Richard and Ben,

If you think I, as a Chinese, have overreacted to Mike Tintner's
writing style, and this is just a culture difference, please let me
know. In that case I'll try my best to learn his way of communication,
at least when talking to British and American people --- who knows, it
may even improve my marketing ability. ;-)

Pei


No, no:  I seriously do not think you have overreacted at all.

I meant my comment half in jest:  Mike has some unique abilities to rub 
people up the wrong way on this list, quite separate from the fact that 
he is British.  The latter is an exacerbating factor, is all.




Richard Loosemore




On Fri, Dec 19, 2008 at 7:01 PM, Ben Goertzel b...@goertzel.org wrote:

And when a Chinese doesn't answer a question, it usually means No ;-)

Relatedly, I am discussing with some US gov't people a potential project
involving customizing an AI reasoning system to emulate the different
inferential judgments of people from different cultures...

ben

On Fri, Dec 19, 2008 at 5:29 PM, Richard Loosemore r...@lightlink.com
wrote:

Ben Goertzel wrote:

yeah ... that's not a matter of the English language but rather a matter
of the American Way ;-p

Through working with many non-Americans I have noted that what Americans
often intend as a playful obnoxiousness is interpreted by non-Americans
more seriously...

Except that, in fact, Mike is not American but British.

As a result of long experience talking to Americans, I have discovered
that what British people intend as routine discussion, Americans interpret
as serious, intentional obnoxiousness.  And then, what Americans (as you
say) intend as playful obnoxiousness, non-Americans interpret more
seriously.



Richard Loosemore








I think we had some mutual colleagues in the past who favored such a
style of discourse ;-)

ben

On Fri, Dec 19, 2008 at 1:49 PM, Pei Wang mail.peiw...@gmail.com
mailto:mail.peiw...@gmail.com wrote:

   On Fri, Dec 19, 2008 at 1:40 PM, Ben Goertzel b...@goertzel.org
   mailto:b...@goertzel.org wrote:

 IMHO, Mike Tintner is not often rude, and is not exactly a
   troll because I
 feel he is genuinely trying to understand the deeper issues
   related to AGI,
 rather than mainly trying to stir up trouble or cause irritation

   Well, I guess my English is not good enough to tell the subtle
   difference in tones, but his comments often sound that You AGIers are
   so obviously wrong that I don't even bother to understand what you are
   saying ... Now let me tell you 

   I don't enjoy this tone.

   Pei




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: https://www.listbox.com/member/?;
Powered by Listbox: http://www.listbox.com



--
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx


agi | Archives | Modify Your Subscription



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: https://www.listbox.com/member/?;
Powered by Listbox: http://www.listbox.com






---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=123753653-47f84b
Powered by Listbox: http://www.listbox.com


Re: [agi] Building a machine that can learn from experience

2008-12-19 Thread Richard Loosemore

Ben Goertzel wrote:
 
Colin,
 
It is of course possible that human intelligence relies upon 
electromagnetic-field sensing that goes beyond the traditional five 
senses.


OR, it might all be a quantum multicosmic phenomenon that is best 
explained with a dose of Evenedrician Datonomy


|-)





Richard Loosemore


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=123753653-47f84b
Powered by Listbox: http://www.listbox.com


Re: [agi] Building a machine that can learn from experience

2008-12-18 Thread Richard Loosemore

Rafael C.P. wrote:

Cognitive computing: Building a machine that can learn from experience
http://www.physorg.com/news148754667.html


Neuroscience vaporware.



Richard Loosemore


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=123753653-47f84b
Powered by Listbox: http://www.listbox.com


Re: [agi] CopyCat

2008-12-17 Thread Richard Loosemore

Vladimir Nesov wrote:

On Wed, Dec 17, 2008 at 6:03 PM, Ben Goertzel b...@goertzel.org wrote:

I happened to use CopyCat in a university AI class I taught years ago, so I
got some experience with it

It was **great** as a teaching tool, but I wouldn't say it shows anything
about what can or can't work for AGI, really...



CopyCat gives a general feel for self-assembling representations and
for operations performed at a reflexive level.  It captures intuitions
about high-level perception better than any other self-contained
description I've seen (which is rather sad, especially given that
CopyCat only touches on using hand-made shallow multilevel
representations, without inventing them, without learning).  Some of
the things happening in my model of high-level representation (as
descriptions of what's happening, not as elements of the model itself)
can be naturally described using the lexicon from CopyCat (slippages,
temperature, salience, structural analogy), even though the algorithm
at the low level is different.
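
As a loose illustration of the temperature idea mentioned above (a toy
sketch only, not CopyCat's actual code; the candidate structures and
their strengths are invented for the example): high temperature makes
the choice among competing structures nearly random, while low
temperature makes the system commit to the strongest candidates.

import random

def choose(candidates, temperature):
    # Pick a candidate with probability weighted by strength**(1/T):
    # large T gives a near-random choice, small T favours the strongest.
    weights = [strength ** (1.0 / temperature) for _, strength in candidates]
    total = sum(weights)
    r = random.uniform(0.0, total)
    acc = 0.0
    for (name, _), w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return name
    return candidates[-1][0]

candidates = [("abc -> abd mapping", 0.9), ("literal copy", 0.3), ("reverse", 0.2)]
print(choose(candidates, temperature=2.0))   # exploratory, nearly random
print(choose(candidates, temperature=0.1))   # almost always the strongest
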



I agree with your sentiments about CopyCat (and its cousins).  It is not 
so much that it delivers specific performance by itself, so much as it 
is a different way to think about how to do such things:  an inspiration 
for a whole class of models.  It is certainly part of the inspiration 
for my system.


Sounded to me like Ben's initial disparaging remarks about CopyCat were 
mostly the result of a BHDE (a Bad Hair Day Event).  It *really* is not 
that useless.





Richard Loosemore



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=123753653-47f84b
Powered by Listbox: http://www.listbox.com


Re: FW: [agi] Lamarck Lives!(?)

2008-12-11 Thread Richard Loosemore

Eric Burton wrote:

It's all a big vindication for genetic memory, that's for certain. I
was comfortable with the notion of certain templates, archetypes,
being handed down as aspects of brain design via natural selection,
but this really clears the way for organisms' life experiences to
simply be copied in some form to their offspring. DNA form!

It is scary to imagine memes scribbling on your genome in this way.
Food for thought! :O


Well, no: that was not the conclusion that we came to during this thread.

I think we all agreed that although we could imagine ways in which some 
acquired information could be passed on through the DNA, the *current* 
evidence does not indicate that large scale transfer of memories is 
happening.


In effect, the recent discoveries might conceivably allow nature to hand 
over to the next generation a 3.5 inch floppy disk (remember those?) 
with some data on it, whereas the implication in what you just said was 
that this floppy disk could be used to transfer the contents of the 
Googleplex :-).  Not so fast, I say.





Richard Loosemore








On 12/11/08, Terren Suydam ba...@yahoo.com wrote:

After talking to an old professor of mine, it bears mentioning that
epigenetic mechanisms such as methylation and histone remodeling are not the
only means of altering transcription. A long established mechanism involves
phosphorylation of transcription factors in the neuron (phosphorylation is a
way of chemically enabling or disabling the function of a particular
enzyme).

In light of that I think there is some fuzziness around the use of
epigenetic here because you could conceivably consider the above
phosphorylation mechanism as epigenetic - functionally speaking, the
effect is the same - an increase or decrease in transcription. The only
difference between that and methylation etc is transience: phosphorylation
of transcription factors is less permanent then altering the DNA.

He also shed some light on the effects on synapses due to epigenetic
mechanisms. Ed, you were wondering how synapse-specific changes could occur
in response to transcription mechanisms (which are central to the neuron).
Specifically: There are 2 possible answers to that puzzle
(that I am aware of);  1) evidence of mRNA and translation machinery
present in dendrites at the site of synapses (see papers published by Oswald
Steward or 2) activity causes a specific synapse to be 'tagged' so that
newly synthesized proteins in the cell body are targeted specifically to the
tagged synapses.

Terren

--- On Thu, 12/11/08, Ed Porter ewpor...@msn.com wrote:
From: Ed Porter ewpor...@msn.com
Subject: FW: [agi] Lamarck Lives!(?)
To: agi@v2.listbox.com
Date: Thursday, December 11, 2008, 10:32 AM

To save you the trouble, the most relevant language from the below-cited
article is:





While scientists don't yet know exactly
how epigenetic regulation affects memory, the theory is that certain
triggers,
such as exercise, visual stimulation, or drugs, unwind DNA, allowing
expression
of genes involved in neural plasticity. That increase in gene expression
might
trigger development of new neural connections and, in turn, strengthen the
neural circuits that underlie memory formation. Maybe our brains are
using these epigenetic mechanisms to allow us to learn and remember things,
or
to provide sufficient plasticity to allow us to learn and adapt, says John
Satterlee, program director of epigenetics at the National
Institute on Drug Abuse, in Bethesda, MD.

We
have solid evidence that HDAC inhibitors massively promote growth of
dendrites
and increase synaptogenesis [the creation of connections between
neurons], says Tsai. The process may boost memory or allow mice to regain
access to lost memories by rewiring or repairing damaged neural circuits.
We believe the memory trace is still there, but the animal cannot
retrieve it due to damage to neural circuits, she adds. 



-Original Message-

From: Ed Porter
[mailto:ewpor...@msn.com]

Sent: Thursday,
 December 11, 2008 10:28 AM

To: 'agi@v2.listbox.com'

Subject: FW: [agi] Lamarck
Lives!(?)



An article related to how changes in the epigenome could affect learning
and memory (the subject which started this thread a week ago):





http://www.technologyreview.com/biomedicine/21801/














agi | Archives | Modify Your Subscription















---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription:
https://www.listbox.com/member/?;
Powered by Listbox: http://www.listbox.com




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: https://www.listbox.com/member/?;
Powered by Listbox: http://www.listbox.com






---
agi
Archives

[agi] Religious attitudes to NBIC technologies

2008-12-08 Thread Richard Loosemore


Another indication that we need to take the public relations issue very 
seriously indeed:  as time passes, this problem of the public attitude 
(and especially the religious attitude) to NBIC technologies will only 
become more extreme:


http://news.bbc.co.uk/2/hi/science/nature/7767192.stm



Richard Loosemore




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] Seeking CYC critiques

2008-12-06 Thread Richard Loosemore

Steve Richfield wrote:

Matt,

On 12/6/08, *Matt Mahoney* [EMAIL PROTECTED] 
mailto:[EMAIL PROTECTED] wrote:


--- On Sat, 12/6/08, Steve Richfield [EMAIL PROTECTED]
mailto:[EMAIL PROTECTED] wrote:

  Internet AGIs are the technology of the future, and always will
be. There will NEVER EVER in a million years be a thinking Internet
silicon intelligence that will be able to solve substantial
real-world problems based only on what exists on the Internet. I
think that my prior email was pretty much a closed-form proof of
that. However, there are MUCH simpler methods that work TODAY, given
the metadata that is presently missing from the Internet.

The internet has about 10^14 to 10^15 bits of knowledge as
searchable text. AGI requires 10^17 to 10^18 bits.

 
This presumes that there isn't some sort of agent at work that filters 
a particular important type of information, so that even a googol of 
text wouldn't be any closer. As I keep explaining, that agent is there 
and working well, to filter the two things that I keep mentioning. 
Hence, you are WRONG here.


If we assume that the internet doubles every 1.5 to 2 years with
Moore's Law, then we should have enough knowledge in 15-20 years.
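
For concreteness, a quick back-of-envelope check of the growth
arithmetic quoted above (a sketch only; the bit estimates and doubling
times are the ones given in the message, not established figures):

import math

def years_needed(start_bits, target_bits, doubling_years):
    # Number of doublings needed, times years per doubling.
    doublings = math.log(target_bits / start_bits, 2)
    return doublings * doubling_years

for start in (1e14, 1e15):
    for target in (1e17, 1e18):
        for dt in (1.5, 2.0):
            print("%.0e -> %.0e bits, %.1f yr/doubling: %4.1f years"
                  % (start, target, dt, years_needed(start, target, dt)))

# The optimistic corner (1e15 -> 1e17 bits, 1.5-year doublings) gives
# about 10 years; the pessimistic corner (1e14 -> 1e18 bits, 2-year
# doublings) gives about 27 years, bracketing the 15-20 years quoted.
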

 
Unfortunately, I won't double my own postings, and few others will 
double their own output. Sure, there will be some additional enlargement 
of the Internet, but its growth is linear once past its introduction, 
which we are, and short of exponential growth of population, which is on 
a scale of a century or so. In short, Moore's law simply doesn't apply 
here, any more than 9 women can make a baby in a month.


However, much of this new knowledge is video, so we also need to
solve vision and speech along with language.

 
Which of course has been stymied by the lack of metadata - my point all 
along.


  While VERY interesting, your proposal appears to leave the
following important questions unanswered:
  1.  How is it an AGI? I suppose this is a matter of definitions.
It looks to me more like a protocol.

AGI means automating the economy so we don't have to work. It means
not just solving the language and vision problems, but also training
the equivalent of 10^10 humans to make money for us. After hardware
costs come down, custom training for specialized roles will be the
major expense. I proposed surveillance as the cheapest way for AGI
to learn what we want. A cheaper alternative might be brain
scanning, but we have not yet developed the technology. (It will be
worth US$1 quadrillion if you can do it).

Or another way to answer your question, AGI is a lot of dumb
specialists plus an infrastructure to route messages to the right
experts.

 
I suspect that your definition here is unique. Perhaps other on this 
forum would like to proclaim which of us is right/wrong.


Since you ask, the two of you seem to be competing for the prize of 
largest number of most diabolically nonsensical comments in the shortest 
amount of time.


You *did* ask.




I thought that 
the definition more or less included an intelligent *_computer_*.


  2.  As I explained earlier on this thread, all human-human
languages have severe semantic limitations, such that (applying this
to your porposal), only very rarely will there ever exist an answer
that PRECISELY answers a question, so some sort of acceptable
error must go into the equation. In the example you used in your
paper, Jupiter is NOT the largest planet that is known, as the
astronomers have identified larger planets in other solar systems.
There may be a good solution to this, e.g. provide the 3 best
answers that are semantically disjoint.

People communicate in natural language 100 to 1000 times faster than
any artificial language, in spite of its supposed limitations.
Remember that the limiting cost is transferring knowledge from human
brains to AGI, 10^17 to 10^18 bits at 2 bits per second per person.

 
Unfortunately, when societal or perceptual filters are involved, there 
will remain HUGE holes in even an infinite body of data. Of course, our 
society has its problems precisely because of those holes, so more data 
doesn't necessarily get you any further.


As for Jupiter, any question you ask is going to get more than one
answer. This is not a new problem.
http://www.google.com/search?q=what+is+the+largest+planet%3F

In my proposal, peers compete for reputation and have a financial
incentive to provide useful information to avoid being blocked or
ignored in an economy where information has negative value.

 
Great! At least that way, I know that the things I see will be good 
Christian content.


This is why it is important for an AGI protocol to provide for
secure authentication.

  3.  Your paper addresses question answering, which as I have
explained here in the 

[agi] Lamarck Lives!(?)

2008-12-03 Thread Richard Loosemore


Am I right in thinking that what these people:

http://www.newscientist.com/article/mg20026845.000-memories-may-be-stored-on-your-dna.html


are saying is that memories can be stored as changes in the DNA inside 
neurons?


If so, that would upset a few apple carts.

Would it mean that memories (including cultural adaptations) could be 
passed from mother to child?


Implication for neuroscientists proposing to build a WBE (whole brain 
emulation):  the resolution you need may now have to include all the DNA 
in every neuron.  Any bets on when they will have the resolution to do that?




Richard Loosemore



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] Lamarck Lives!(?)

2008-12-03 Thread Richard Loosemore



Interesting.

Note, however, that it is conceivable that those other examples of plant 
and bacterial adaptation could be explained as situation-specific - in 
the sense that the particular cause of the adaptation could have worked 
in ways that were not generalizable to other, similar factors.  So, some 
very specific factors could be inherited while others could never have 
an effect because they just don't happen to affect methylation.


But if the neural results hold up, this would be a whole new ball game: 
 a completely general mechanism for storing memories in an inheritable 
form.  Not just [memory-for-your-first-kiss] affecting the DNA, but the 
whole shebang.


If it turns out that this is the correct interpretation, then this is 
one hell of a historic moment.


I must say, I am still a little skeptical, but we'll see how it plays out.


Richard Loosemore




Ben Goertzel wrote:

Note also,

http://sciencelinks.jp/j-east/article/200308/20030803A0129895.php

Jean Baptiste de Lamarck (1744-1829) maintained that characteristics
that were acquired during an organism's lifetime are passed on to its
offspring. This theory, known as Lamarckian inheritance, was later
completely discredited. However, recent progress in epigenetics
research suggests it needs to be reexamined in consideration of DNA
methylation. In this article, I summarize our observations, which
support Lamarckian inheritance. Initial experiments indicate that (1)
artificially induced demethylation of rice genomic DNA results in
heritable dwarfism, and (2) cold stress induces extensive
demethylation in somatic cells of the maize root. Based on these
results, I propose the hypothesis that traits that are acquired during
plant growth are sometimes inherited by their progeny through
persistent alteration of the DNA methylation status. (author abst.)

I wonder how this relates to adaptive mutagenesis

http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1206667

which has been rather controversial

http://www.genetics.org/cgi/content/full/165/4/2319

ben




On Wed, Dec 3, 2008 at 11:11 AM, Richard Loosemore [EMAIL PROTECTED] wrote:

Am I right in thinking that what these people:

http://www.newscientist.com/article/mg20026845.000-memories-may-be-stored-on-your-dna.html


are saying is that memories can be stored as changes in the DNA inside
neurons?

If so, that would upset a few apple carts.

Would it mean that memories (including cultural adaptations) could be passed
from mother to child?

Implication for neuroscientists proposing to build a WBE (whole brain
emulation):  the resolution you need may now have to include all the DNA in
every neuron.  Any bets on when they will have the resolution to do that?



Richard Loosemore



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription:
https://www.listbox.com/member/?;
Powered by Listbox: http://www.listbox.com









---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] Lamarck Lives!(?)

2008-12-03 Thread Richard Loosemore

Harry Chesley wrote:

On 12/3/2008 8:11 AM, Richard Loosemore wrote:

 Am I right in thinking that what these people:


http://www.newscientist.com/article/mg20026845.000-memories-may-be-stored-on-your-dna.html 



 are saying is that memories can be stored as changes in the DNA
 inside neurons?

 If so, that would upset a few apple carts.


Yes, but it obviously needs a lot more confirmation first. :-)


 Would it mean that memories (including cultural adaptations) could be
 passed from mother to child?


No. As far as I understand it, they are proposing changes to the DNA in 
the neural cells only, so it wouldn't be passed on. And I would expect 
that the changes are specific to the neural structure of the subject, so 
even if you moved the changes to DNA in another subject, it wouldn't 
work.


You're right, of course.

But if this holds up, it would not be quite so crazy to imagine a 
mechanism that uses junk DNA signalling to get the end caps of the 
genital DNA to reflect the changes.


I admit, though, this is stretching it a bit ;-).

As for the changes not working in another subject:  yes, it would 
probably be the case that specific memories are encoded in an 
individual-specific way.  But what about more general factors?  What if 
there were some primitive types of musical understanding, say, that were 
common across individuals, for example?  Like, a set of very primitive 
concepts having to do with links between sounds and finger movements? 
If such general factors could be passed across, a person could inherit 
above average musical ability because their parents had been active 
musicians all their lives.


All this is fun to think about, but I confess I am mostly playing 
devil's advocate here.



 Implication for neuroscientists proposing to build a WBE (whole brain
 emulation):  the resolution you need may now have to include all the
 DNA in every neuron.  Any bets on when they will have the resolution
 to do that?


No bets here. But they are proposing that elements are added onto the 
DNA, not that changes are made in arbitrary locations within the DNA, so 
it's not /quite/ as bad as you suggest


It would be pretty embarrassing for people gearing up for scans with a 
limiting resolution at about the size of one neuron, though.  IIRC that 
was the rough order of magnitude assumed in the proposal I reviewed here 
recently.




Richard Loosemore





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] Lamarck Lives!(?)

2008-12-03 Thread Richard Loosemore

Philip Hunt wrote:

2008/12/3 Richard Loosemore [EMAIL PROTECTED]:

http://www.newscientist.com/article/mg20026845.000-memories-may-be-stored-on-your-dna.html

are saying is that memories can be stored as changes in the DNA inside
neurons?


No. They are saying memories might be stored as changes *on* the DNA.

Imagine a big long DNA molecule. It has little molecules attached to
bits of it, which regulate which genes are and aren't expressed.
That's how a cell knows it's a skin cell, or an eye cell or a liver
cell. Apparently the same mechanism is used in neurons are part of the
mechanism for laying down new memories.


Yes, I know this:  I appreciate the difference between tampering with 
the gene regulation apparatus and affecting the codons themselves, but 
for my money, *any* mechanism that collects synaptic signals (to speak 
very broadly) and then walks over to some DNA and does anything 
systematic to the DNA, to record the results of those signals, is 
storing something on the DNA.  Previously there seemed to be no way to 
get from one to the other, but now it appears that there is.




Would it mean that memories (including cultural adaptations) could be passed
from mother to child?


No, for two reasons: (1) the DNA isn't being changed. (2) even if the
DNA was being changed, it isn't in the germ-line.


This is a crucial point:  has anyone definitely ruled out the 
possibility that state of the gene regulation apparatus could somehow 
affect the germ line?


This I am not clear about.  When the Mom and Pop DNA really start to get 
down and boogie together, do they throw away the scratchpad that 
contains all the extra information about the state of the junk DNA, the 
methylation endcaps, etc?  Or is it still an open question whether some 
of that can carry over?





Richard Loosemore




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] Lamarck Lives!(?)

2008-12-03 Thread Richard Loosemore

Terren Suydam wrote:

Hi Richard,

Thanks for the link, pretty intriguing. It's important to note that
the mechanism proposed is just a switch that turns specific genes
off... so properly understood, it's likely that the resolution
required to model this mechanism would not necessarily require
modeling the entire DNA strand. It seems more likely that these
methylation caps are being applied to very specific genes that
produce proteins heavily implicated in the dynamics of synapse
creation/destruction (or some other process related to memory).  So
modeling the phenomenon could very possibly be done functionally.

Memories could only be passed to the child if 1) those DNA changes
were also made in the germ cells (i.e. egg/sperm) and 2) the DNA
changes involved resulted in a brain organization in the child that
mimicked the parent's brain.  (1) is very unlikely but theoretically
possible; (2) is impossible for two reasons. One is, the methylation
patterns proposed involve a large number of neurons, converging on a
pattern of methylation; in contrast, a germ cell would only capture
the methylation of a single cell (which would then be cloned in the
developing fetus). Second, the hypothesized methylation patterns
represent a different medium of information storage in the mature
brain than what is normally considered to be the role of DNA in the
developing brain. It would truly be a huge leap to suggest that the
information stored via this alteration of DNA would result in that
information being preserved somehow in a developing brain.

There are plenty of other epigenetic phenomena to get Lamarck fans
excited, but this isn't one of them.


I see what you are saying.

I really want to distance myself from this a little bit (don't want to 
seem like I am really holding the banner for Lamarck's crowd), but I 
think the main conclusion that we can draw from this piece of research 
is, as I said a moment ago, that we now have reason to believe that 
there is *some* mechanism that connects memories to DNA modifications, 
whereas if anyone had suggested such a link a few years ago they would 
have been speculating on thin ice.


I definitely agree that getting from there to a situation in which 
packages of information are being inserted into germ cell DNA is a long 
road, but this one new piece of research has - surprisingly - just cut 
the length of that road in half.


All fun and interesting, but now back to the real AGI




Richard Loosemore


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] Lamarck Lives!(?)

2008-12-03 Thread Richard Loosemore

Terren Suydam wrote:

I definitely agree that getting from there to a situation in which
packages of information are being inserted into germ cell DNA is a
long road, but this one new piece of research has - surprisingly -
just cut the length of that road in half.


Half of infinity is still infinity ;-]

It's just not a possibility, which should be obvious if you look at
the quantity of information involved. Let M be a measure of the
information stored via distributed methylation patterns across some
number of neurons N. The amount of information stored by a single
neuron's methylated DNA is going to be much smaller than M (roughly
M/N). A single germ cell which might conceivably inherit the
methylation pattern from some single neuron would not be able to
convey any more than a [1/N] piece of the total information that
makes up M.
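
To make the counting argument above concrete, a tiny worked example with
invented numbers (neither M nor N is specified in the thread):

M = 1e9   # hypothetical total bits of memory encoded across the pattern
N = 1e8   # hypothetical number of neurons participating in the pattern

bits_per_neuron = M / N                    # what one cell's DNA carries
fraction_inherited = bits_per_neuron / M   # share one germ cell passes on

print(bits_per_neuron)      # 10.0 bits
print(fraction_inherited)   # 1e-08, a vanishingly small piece of M
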


Now you're just trying to make me think ;-).

Okay, try this.

[heck, you don't have to:  I am just playing with ideas here...]

The methylation pattern has not necessarily been shown to *only* store 
information in a distributed pattern of activation - the jury's out on 
that one (correct me if I'm wrong).


Suppose that the methylation end caps are just being used as a way 
station for some mechanism whose *real* goal is to make modifications to 
 some patterns in the junk DNA.  So, here I am suggesting that the junk 
DNA of any particular neuron is being used to code for large numbers of 
episodic memories (one memory per DNA strand, say), with each neuron 
being used as a redundant store of many episodes.  The same episode is 
stored in multiple neurons, but each copy is complete.  When we observe 
changes in the methylation patterns, perhaps these are just part of the 
transit mechanism, not the final destination for the pattern.  To put it 
in the language that Greg Bear would use, the endcaps were just part of 
the radio system. (http://www.gregbear.com/books/darwinsradio.cfm)


Now suppose that part of the junk sequences that code for these memories 
are actually using a distributed coding scheme *within* the strand (in 
the manner of a good old fashioned backprop neural net, shall we say). 
That would mean that, contrary to what I said in the above paragraph, 
the individual strands were coding a bunch of different episodic memory 
traces, not just one.


(It is even possible that the old idea of flashbulb memories may survive 
the critiques that have been launched against it ... and in that case, 
it could be that what we are talking about here is the mechanism for 
storing that particular set of memories.  And in that case, perhaps the 
system expects so few of them, that all DNA strands everywhere in the 
system are dedicated to storing just the individual's store of flashbulb 
memories).


Now, finally, suppose that there is some mechanism for radioing these 
memories to distribute them around the system ... and that the radio 
network extends as far as the germ DNA.


Now, the offspring could get the mixed flashbulb memories of its 
parents, in perhaps very dilute or noisy form.


This assumes that whatever coding scheme is used to store the 
information can somehow transcend the coding schemes used by different 
individuals.  Since we do not yet know how much common ground there is 
between the knowledge storage used by individuals yet, this is still 
possible.


There:  I invented a possible mechanism.

Does it work?





Richard Loosemore



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] Lamarck Lives!(?)

2008-12-03 Thread Richard Loosemore

Ben Goertzel wrote:

I know you're just playing here but it would be easy to empirically test this. 
Does junk DNA change between birth and death? Something tells me we would have 
discovered something that significant a long time ago.

Terren


well, loads of mutations occur in nuclear DNA between birth and death;
this is part of how aging occurs.

There are specific DNA repair mechanisms that fix mutation errors that
occur during the cell's lifetime

It seems quite plausible that these repair mechanisms might work
differently on coding and noncoding regions of the DNA



Ah, hang on folks:  what I was meaning was that the *state* of the junk 
DNA was being used, not the code.


I am referring to the stuff that is dynamically interacting, as a result 
of which genes are switched on and off all over the place  so this 
is a gigantic network of switches.


I wouldn't suggest that something is snipping and recombining the actual 
code of the junk DNA, only that the state of the switches is being 
used to code for something.


Question is: can the state of the switches be preserved during reproduction?



Richard Loosemore


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] AIXI (was: Mushed Up Decision Processes)

2008-11-30 Thread Richard Loosemore

Philip Hunt wrote:

2008/11/29 Matt Mahoney [EMAIL PROTECTED]:

The general problem of detecting overfitting is not computable. The
principle according to Occam's Razor, formalized and proven by
Hutter's AIXI model, is to choose the shortest program (simplest
hypothesis) that generates the data. Overfitting is the case of
choosing a program that is too large.



Can someone explain AIXI to me? My understanding is that you've got 
some black-box process emitting output, and you generate all possible
programs that emit the same output, then choose the shortest one.
You then run this program and its subsequent output is what you
predict the black-box process will do. This has the minor drawback,
of course, that it requires infinite processing power and is
therefore slightly impractical.
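
(For what it is worth, here is a deliberately tiny Python sketch of that
brute-force idea.  It is emphatically not AIXI, which weights all programs and
is incomputable; it just searches a toy three-instruction language for the
shortest program that reproduces the observed string, then runs it further to
predict.)

from itertools import product

def run(prog, n_out, max_steps=500):
    """Tiny interpreter: 'a'/'b' emit a symbol, 'L' jumps back to the start."""
    out, pc, steps = [], 0, 0
    while len(out) < n_out and pc < len(prog) and steps < max_steps:
        op = prog[pc]
        if op == 'L':
            pc = 0
        else:
            out.append(op)
            pc += 1
        steps += 1
    return ''.join(out)

def predict(observed, horizon, max_len=6):
    """Find the shortest program (in this toy language) that reproduces
    'observed', run it a bit longer, and return what it says comes next."""
    for length in range(1, max_len + 1):
        for prog in map(''.join, product('abL', repeat=length)):
            if run(prog, len(observed)) == observed:
                return run(prog, len(observed) + horizon)[len(observed):]
    return None

print(predict('ababab', horizon=4))   # -> 'abab'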

I've read Hutter's paper Universal algorithmic intelligence, A 
mathematical top-down approach which amusingly describes itself as 
a gentle introduction to the AIXI model.


Hutter also describes AIXItl, with computation time Ord(t*2^L), where I 
assume L is the length of the program and I'm not sure what t is. Is 
AIXItl something that could be practically written or is it purely a 
theoretical construct?


In short, is there something to AIXI or is it something I can safely
ignore?



It is something that, if you do not ignore it, will waste every second
of brain cpu time that you devote to it ;-).

Matt has a habit of repeating some version of the above statement 
... according to Occam's Razor, [which was] formalized and proven by 
Hutter's AIXI model... on a semi-periodic basis.  The first n times I 
took the trouble to explain why this statement is nonsense.  Now I don't 
bother.


AIXI is a mathematical abstraction taken to the point of absurdity and 
beyond.  By introducing infinite numbers of copies of all possible 
universes into your formalism, and by implying that functions can be 
computed on such structures, and by redefining common terms like 
intelligence to be abstractions based on that formalism, you can prove 
anything under the sun.


That fact seems to escape some people.



Richard Loosemore


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] Hunting for a Brainy Computer

2008-11-25 Thread Richard Loosemore

Ben Goertzel wrote:

Richard,

It might be more useful to discuss more recent papers by the same
authors regarding the same topic, such as the more accurately-titled

***
Sparse but not Grandmother-cell coding in the medial temporal lobe.
Quian Quiroga R, Kreiman G, Koch C and Fried I.
Trends in Cognitive Sciences. 12: 87-91; 2008
***

at

http://www2.le.ac.uk/departments/engineering/extranet/research-groups/neuroengineering-lab/



There are always more papers that can be discussed.

But that does not change the fact that we provided arguments to back up 
our claims, when we analyzed the original Quiroga et al paper, and all 
the criticism directed against our paper on this list, in the last week 
or so, has completely ignored the actual content of that argument.







Richard Loosemore











On Mon, Nov 24, 2008 at 1:32 PM, Richard Loosemore [EMAIL PROTECTED] wrote:

Ben Goertzel wrote:

Hi,

BTW, I just read this paper



For example, in Loosemore & Harley (in press) you can find an analysis of a
paper by Quiroga, Reddy, Kreiman, Koch, and Fried (2005) in which the latter
try to claim they have evidence in favor of grandmother neurons (or sparse
collections of grandmother neurons) and against the idea of distributed
representations.

which I found at

 http://www.vis.caltech.edu/~rodri/

and I strongly disagree that


We showed their conclusion to be incoherent.  It was deeply implausible,
given the empirical data they reported.


The claim that Harley and I made - which you quote above - was the
*conclusion* sentence that summarized a detailed explanation of our
reasoning.

That reasoning was in our original paper, and I also went to the trouble of
providing a longer version of it in one of my last posts on this thread.  I
showed, in that argument, that their claims about sparse vs distributed
representations were incoherent, because they had not thought through the
implications contained in their own words - part of which you quote below.

Merely quoting their words again, without resolving the inconsistencies that
we pointed out, proves nothing.

We analyzed that paper because it was one of several that engendered a huge
amount of publicity.  All of that publicity - which, as far as we can see,
the authors did not have any problem with - had to do with the claims about
grandmother cells, sparseness and distributed representations.  Nobody - not
I, not Harley, and nobody else as far as I know - disputes that the
empirical data were interesting, but that is not the point:  we attacked
their paper because of their conclusion about the theoretical issue of
sparse vs distributed representations, and the wider issue about grandmother
cells.  In that context, it is not true that, as you put it below, the
authors only [claimed] to have gathered some information on empirical
constraints on how neural knowledge representation may operate.  They went
beyond just claiming that they had gathered some relevant data:  they tried
to say what that data implied.



Richard Loosemore








Their conclusion, to quote them, is that


How neurons encode different percepts is one of the most intriguing
questions in neuroscience. Two extreme hypotheses are
schemes based on the explicit representations by highly selective
(cardinal, gnostic or grandmother) neurons and schemes that rely on
an implicit representation over a very broad and distributed population
of neurons1–4,6. In the latter case, recognition would require the
simultaneous activation of a large number of cells and therefore we
would expect each cell to respond to many pictures with similar basic
features. This is in contrast to the sparse firing we observe, because
most MTL cells do not respond to the great majority of images seen
by the patient. Furthermore, cells signal a particular individual or
object in an explicit manner27, in the sense that the presence of the
individual can, in principle, be reliably decoded from a very small
number of neurons. We do not mean to imply the existence of single
neurons coding uniquely for discrete percepts for several reasons:
first, some of these units responded to pictures of more than one
individual or object; second, given the limited duration of our
recording sessions, we can only explore a tiny portion of stimulus
space; and third, the fact that we can discover in this short time some
images—such as photographs of Jennifer Aniston—that drive the
cells suggests that each cell might represent more than one class of
images. Yet, this subset of MTL cells is selectively activated by
different views of individuals, landmarks, animals or objects. This
is quite distinct from a completely distributed population code and
suggests a sparse, explicit and invariant encoding of visual percepts in
MTL.


The only thing that bothers me about the paper is that the title


Invariant visual representation by single neurons in
the human brain


does not actually reflect the conclusions drawn.  A title like


Invariant visual

Re: [agi] who is going to build the wittgenstein-ian AI filter to spot all the intellectual nonsense

2008-11-25 Thread Richard Loosemore

Tudor Boloni wrote:
we invariably generate and then fruitlessly explore (our field is even 
more exposed to this than most others) until we come up against the 
limits of our own language, and defeated and fatigued realize we never 
thought the questions through. i nominate this guy:


http://hyperlogic.blogspot.com/

at a minimum wittgenstein's Brown Book should be required reading for 
all AGI list members


Read it.  Along with pretty much everything else he wrote (that is in 
print, anyhow).


Calling things a category error is a bit of a cop out.




Richard Loosemore


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] who is going to build the wittgenstein-ian AI filter to spot all the intellectual nonsense

2008-11-25 Thread Richard Loosemore

Tudor Boloni wrote:
wrong category is trivial indeed, but quickly removing computing 
resources from impossible processes can be a great benefit to any 
system, and an incredible benefit if the system learns to spot deeply 
nonsensical problems in advance of dedicating almost any resources to 
it... what if we could design a system that by its very structuring 
couldnt even generate these wittgensteinian deep errors... also, as far 
it being a cop out, i disagree it clears the mind to the deepest levels 
allowing a springwell of clarity that shows other answers in record time 
and accuracy, an example: minsky points to the same stupidity of asking 
the question of what is consciousness, preferring to just look for 
stimuli/behavior rules that are required to survive and act, and letting 
others worry about how many of those rules make up their version of the 
word conscious...   


The problem with this is that what seemed to Wittgenstein and Minsky 
(when they had their Philosophical Behaviorist hats on) as just 
meaningless words that referred to nothing (e.g. consciousness) may well 
turn out to have deeper and more interesting structure than they 
thought.  For example, they could not, in principle, answer any 
questions about the practical effects of the various manipulations that 
I proposed in my recent paper.  And yet, it turns out that I can make 
predictions about how the subjective experience of people would be 
affected by these manipulations:  pretty good work for something that is 
labelled by W & M as a non-concept!


My point of course, is that they were wrong about some of the specific 
things that would be a waste of time for an AGI to think about.


They were right in principle to say that some questions are framed badly 
(as in, "But now show me where the University is!"), but it would be 
dangerous to assume that we can sort the wheat from the chaff and get it 
right every time, no?





Richard Loosemore





On Tue, Nov 25, 2008 at 3:46 PM, Richard Loosemore [EMAIL PROTECTED] wrote:


Tudor Boloni wrote:

we invariably generate and then fruitlessly explore (our field
is even more exposed to this than most others) until we come up
against the limits of our own language, and defeated and
fatigued realize we never thought the questions through. i
nominate this guy:

http://hyperlogic.blogspot.com/

at a minimum wittgenstein's Brown Book should be required
reading for all AGI list members


Read it.  Along with pretty much everything else he wrote (that is
in print, anyhow).

Calling things a category error is a bit of a cop out.




Richard Loosemore


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: https://www.listbox.com/member/?;
https://www.listbox.com/member/?;
Powered by Listbox: http://www.listbox.com









---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] Hunting for a Brainy Computer

2008-11-25 Thread Richard Loosemore

Ben Goertzel wrote:

http://www2.le.ac.uk/departments/engineering/extranet/research-groups/neuroengineering-lab/


There are always more papers that can be discussed.


OK, sure, but this is a more recent paper **by the same authors,
discussing the same data***
and more recent similar data.


But that does not change the fact that we provided arguments to back up our
claims, when we analyzed the original Quiroga et al paper, and all the
criticism directed against our paper on this list, in the last week or so,
has completely ignored the actual content of that argument.


My question is how your arguments apply to their more recent paper
discussing the same data

It seems to me that their original paper was somewhat sloppy in the
theoretical discussion accompanying the impressive data, and you
largely correctly picked on their sloppy theoretical discussion ...
and now, their more recent works have cleaned up much of the
sloppiness of their earlier theoretical discussions.

Do you disagree with this?


Nope, don't disagree:  I just haven't had time to look at their paper yet.



It's not very interesting to me to dissect the sloppy theoretical
discussion at the end of an experimental paper from a few years ago.
What is more interesting to me is whether the core ideas underlying
the researchers' work are somehow flawed.  If their earlier discussion
was sloppy and was pushed back on by their peers, leading to a clearer
theoretical discussion in their current papers, then that means that
the scientific community is basically doing what it's supposed to
do


That is fine.

But when evaluating our particular critique, it is only fair to keep it 
in its proper context.  We set out to pick a collection of the most 
widely publicized neuroscience papers, to see how they looked from the 
point of view of a sophisticated understanding of cognitive science.


Our conclusion was that, TAKEN AS A WHOLE, this set of representative 
papers were interpreting their results in ways that were not very coherent. 
Rather than advancing the cause of cognitive science, they were turning 
the clock back to an era when we knew very little about what might be 
going on.


If Quiroga et al do a better job now, then that is all to the good.  But 
Harley and I had a broader perspective, and we feel that the overall 
standards are pretty low.






Richard Loosemore



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] who is going to build the wittgenstein-ian AI filter to spot all the intellectual nonsense

2008-11-25 Thread Richard Loosemore

Tudor Boloni wrote:
Richard, please give me a link to the paper or at least the example 
related to manipulation of subjective experience in others, i am indeed 
curious to see how their approach would fare... thanks for the effort in 
advance


Sure thing:

http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf




Richard Loosemore


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] Hunting for a Brainy Computer

2008-11-24 Thread Richard Loosemore

Ben Goertzel wrote:

Hi,

BTW, I just read this paper



For example, in Loosemore & Harley (in press) you can find an analysis of a
paper by Quiroga, Reddy, Kreiman, Koch, and Fried (2005) in which the latter
try to claim they have evidence in favor of grandmother neurons (or sparse
collections of grandmother neurons) and against the idea of distributed
representations.


which I found at

 http://www.vis.caltech.edu/~rodri/

and I strongly disagree that


We showed their conclusion to be incoherent.  It was deeply implausible,
given the empirical data they reported.



The claim that Harley and I made - which you quote above - was the 
*conclusion* sentence that summarized a detailed explanation of our 
reasoning.


That reasoning was in our original paper, and I also went to the trouble 
of providing a longer version of it in one of my last posts on this 
thread.  I showed, in that argument, that their claims about sparse vs 
distributed representations were incoherent, because they had not 
thought through the implications contained in their own words - part of 
which you quote below.


Merely quoting their words again, without resolving the inconsistencies 
that we pointed out, proves nothing.


We analyzed that paper because it was one of several that engendered a 
huge amount of publicity.  All of that publicity - which, as far as we 
can see, the authors did not have any problem with - had to do with the 
claims about grandmother cells, sparseness and distributed 
representations.  Nobody - not I, not Harley, and nobody else as far as 
I know - disputes that the empirical data were interesting, but that is 
not the point:  we attacked their paper because of their conclusion 
about the theoretical issue of sparse vs distributed representations, 
and the wider issue about grandmother cells.  In that context, it is not 
true that, as you put it below, the authors only [claimed] to have 
gathered some information on empirical constraints on how neural 
knowledge representation may operate.  They went beyond just claiming 
that they had gathered some relevant data:  they tried to say what that 
data implied.




Richard Loosemore








Their conclusion, to quote them, is that


How neurons encode different percepts is one of the most intriguing
questions in neuroscience. Two extreme hypotheses are
schemes based on the explicit representations by highly selective
(cardinal, gnostic or grandmother) neurons and schemes that rely on
an implicit representation over a very broad and distributed population
of neurons1–4,6. In the latter case, recognition would require the
simultaneous activation of a large number of cells and therefore we
would expect each cell to respond to many pictures with similar basic
features. This is in contrast to the sparse firing we observe, because
most MTL cells do not respond to the great majority of images seen
by the patient. Furthermore, cells signal a particular individual or
object in an explicit manner27, in the sense that the presence of the
individual can, in principle, be reliably decoded from a very small
number of neurons. We do not mean to imply the existence of single
neurons coding uniquely for discrete percepts for several reasons:
first, some of these units responded to pictures of more than one
individual or object; second, given the limited duration of our
recording sessions, we can only explore a tiny portion of stimulus
space; and third, the fact that we can discover in this short time some
images—such as photographs of Jennifer Aniston—that drive the
cells suggests that each cell might represent more than one class of
images. Yet, this subset of MTL cells is selectively activated by
different views of individuals, landmarks, animals or objects. This
is quite distinct from a completely distributed population code and
suggests a sparse, explicit and invariant encoding of visual percepts in
MTL.


The only thing that bothers me about the paper is that the title


Invariant visual representation by single neurons in
the human brain


does not actually reflect the conclusions drawn.  A title like


Invariant visual representation by sparse neuronal population encodings in
the human brain


would have reflected their actual conclusions a lot better.  But the paper's
conclusion clearly says


We do not mean to imply the existence of single
neurons coding uniquely for discrete percepts for several reasons:


I see some incoherence between the title and the paper's contents,
which is a bit frustrating, but no incoherence in the paper's conclusion,
nor between the data and the conclusion.

According to what the paper says, the authors do not claim to have
solve the neural knowledge representation problem, but only to have
gathered some information on empirical constraints on how neural
knowledge representation may operate.

-- Ben G


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https

Re: [agi] Hunting for a Brainy Computer

2008-11-21 Thread Richard Loosemore

Steve Richfield wrote:

Richard,

On 11/20/08, *Richard Loosemore* [EMAIL PROTECTED] wrote:


Steve Richfield wrote:

Richard,
 Broad agreement, with one comment from the end of your posting...
 On 11/20/08, *Richard Loosemore* [EMAIL PROTECTED] wrote:

   Another, closely related thing that they do is talk about low level
   issues without realizing just how disconnected those are from where
   the real story (probably) lies.  Thus, Mohdra emphasizes the
   importance of spike timing as opposed to average firing rate.

 There are plenty of experiments that show that consecutive
closely-spaced pulses result when something goes off scale,
probably the equivalent to computing Bayesian probabilities > 100%,
somewhat akin to the overflow light on early analog
computers. These closely-spaced pulses have a MUCH larger
post-synaptic effect than the same number of regularly spaced
pulses. However, as far as I know, this only occurs during
anomalous situations - maybe when something really new happens,
that might trigger learning?
 IMHO, it is simply not possible to play this game without
having a close friend with years of experience poking mammalian
neurons. This stuff is simply NOT in the literature.

   He may well be right that the pattern or the timing is more
   important, but IMO he is doing the equivalent of saying Let's talk
   about the best way to design an algorithm to control an airport.
   First problem to solve:  should we use Emitter-Coupled Logic in the
   transistors that are in our computers that will be running the
   algorithms.

 Still, even with my above comments, your conclusion is still
correct.


The main problem is that if you interpret spike timing to be playing
    the role that you (and they) imply above, then you are committing
yourself to a whole raft of assumptions about how knowledge is
generally represented and processed.  However, there are *huge*
problems with that set of implicit assumptions  not to put too
fine a point on it, those implicit assumptions are equivalent to the
worst, most backward kind of cognitive theory imaginable.  A theory
that is 30 or 40 years out of date.

 
OK, so how else do you explain that in fairly well understood situations 
like stretch receptors, the rate indicates the stretch UNLESS you 
exceed the mechanical limit of the associated joint, whereupon you start 
getting pulse doublets, triplets, etc. Further, these pulse groups have 
a HUGE effect on post synaptic neurons. What does your cognitive science 
tell you about THAT?


See my parallel reply to Ben's point:  I was talking about the fact that 
neuroscientists make these claims about high level cognition;  I was not 
referring to the cases where they try to explain low-level, sensory and 
motor periphery functions like stretch receptor neurons.


So, to clarify:  yes, it is perfectly true that the very low level 
perceptual and motor systems use simple coding techniques.  We have 
known for decades (since Hubel and Wiesel) that retinal ganglion cells 
use simple coding schemes, etc etc.


But the issue I was discussing was about the times when neuroscientists 
make statements about high level concepts and the processing of those 
concepts.  Many decades ago people suggested that perhaps these concepts 
were represented by single neurons, but that idea was shot down very 
quickly, and over the years we have found such sophisticated information 
processing effects occurring in cognition that it is very difficult to 
see how single neurons (or multiple redundant sets of neurons) could 
carry out those functions.


This idea is so discredited that it is hard to find references on the 
subject:  it has been accepted for so long that it is common knowledge 
in the cognitive science community.




 


The gung-ho neuroscientists seem blissfully unaware of this fact
because  they do not know enough cognitive science. 

 
I stated a Ben's List challenge a while back that you apparently missed, 
so here it is again.
 
*You can ONLY learn how a system works by observation, to the extent 
that its operation is imperfect. Where it is perfect, it represents a 
solution to the environment in which it operates, and as such, could be 
built in countless different ways so long as it operates perfectly. 
Hence, computational delays, etc., are fair game, but observed cognition 
and behavior are NOT except to the extent that perfect cognition and 
behavior can be described, whereupon the difference between observed and 
theoretical contains the information about construction.*
** 
*A perfect example

Re: [agi] Hunting for a Brainy Computer

2008-11-21 Thread Richard Loosemore

Ben Goertzel wrote:

Richard,

My point was that there are essentially no neuroscientists out there
who believe that concepts are represented by single neurons.  So you
are in vehement agreement with the neuroscience community on this
point.

The idea that concepts may be represented by cell assemblies, or
attractors within cell assemblies, are more prevalent.  I assume
you're familiar with the thinking/writing of for instance Walter
Freeman and Susan Greenfield on these issues.   You may consider them
wrong, but they are not wrong due to obvious errors or due to
obliviousness to cog sci data.


So let me see if I've got this straight:  you are saying that there are 
essentially no neuroscientists who talk about spiking patterns in single 
neurons encoding relationships between concepts?


Not low-level features, as we discussed before, but medium- to 
high-level concepts?


You are saying that when they talk about the spike trains encoding 
bayesian contingencies, they NEVER mean, or imply, contingencies between 
concepts?




Richard Loosemore


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] Hunting for a Brainy Computer

2008-11-21 Thread Richard Loosemore

Vladimir Nesov wrote:

On Fri, Nov 21, 2008 at 8:09 PM, Richard Loosemore [EMAIL PROTECTED] wrote:

Ben Goertzel wrote:

Richard,

My point was that there are essentially no neuroscientists out there
who believe that concepts are represented by single neurons.  So you
are in vehement agreement with the neuroscience community on this
point.

The idea that concepts may be represented by cell assemblies, or
attractors within cell assemblies, are more prevalent.  I assume
you're familiar with the thinking/writing of for instance Walter
Freeman and Susan Greenfield on these issues.   You may consider them
wrong, but they are not wrong due to obvious errors or due to
obliviousness to cog sci data.

So let me see if I've got this straight:  you are saying that there are
essentially no neuroscientists who talk about spiking patterns in single
neurons encoding relationships between concepts?

Not low-level features, as we discussed before, but medium- to high-level
concepts?

You are saying that when they talk about the spike trains encoding bayesian
contingencies, they NEVER mean, or imply, contingencies between concepts?



What's a concept in this context, Richard? For example, place cells
activate on place fields, pretty palpable correlates, one could say
they represent concepts (and it's not a perceptual correlate). There
are relations between these concepts, prediction of their activity,
encoding of their sequences that plays role in episodic memory, and so
on. At the same time, the process by which they are computed is
largely unknown, individual cells perform some kind of transformation
on other cells, but how much of the concept is encoded in cells
themselves rather than in cells they receive input from is also
unknown. Since they jump on all kinds of contextual cues, it's likely
that their activity to some extent depends on activity in most of the
brain, but it doesn't invalidate analysis considering individual cells
or small areas of cortex, just as the gravitational pull from Mars
doesn't invalidate approximate calculations made on Earth according to
Newton's laws. I don't quite see what you are criticizing, apart from
specific examples of apparent confusion.


No, object-concepts and the like.  Not place, motion or action 'concepts'.

For example, Quiroga et al showed their subjects pictures of famous 
places and people, then made assertions about how those things were 
represented.




Richard Loosemore


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] Hunting for a Brainy Computer

2008-11-21 Thread Richard Loosemore

Vladimir Nesov wrote:

On Fri, Nov 21, 2008 at 8:34 PM, Richard Loosemore [EMAIL PROTECTED] wrote:

No, object-concepts and the like.  Not place, motion or action 'concepts'.

For example, Quiroga et al showed their subjects pictures of famous places
and people, then made assertions about how those things were represented.



Now that I have a bit better understanding of neuroscience than a year
ago, I reread relevant part of your paper and skimmed the Quiroga et
al's paper (Invariant visual representation by single neurons in the
human brain, for those who don't want to look it up in Richard's
paper). I don't see a significant disagreement. They didn't mean to
imply obviously wrong assertion that there are only few cells
corresponding to each high-level concept (to quote: the fact that we
can discover in this short time some images -- such as photographs of
Jennifer Aniston -- that drive the cells, suggests that each cell
might represent more than one class of images). Sparse and
distributed representations are mentioned as extreme perspectives, not
a dichotomy. Results certainly have some properties of sparse
representation, as opposed to extremely distributed, which doesn't
mean that results imply extremely sparse representation. Observed
cells as correlates of high-level concepts were surprisingly invariant
to the form in which that high-level concept was presented, which does
suggest that representation is much more explicit than in the
extremely distributed case. Of course, it's not completely explicit.

So, at this point I see at least this item in your paper as a strawman
objection (given that I didn't revisit other items).



Not correct.  We covered all the possible interpretations of what they 
said.  All you have done above is to quote back their words, without 
taking into account the fact that we thought through the implications of 
what they said, and pointed out that those implications did not make any 
sense.


They want some kind of mixture of sparse and multiply redundant and 
not distributed.  The whole point of what we wrote was that there is 
no consistent interpretation of what they tried to give as their 
conclusion.  If you think there is, bring it out and put it side by side 
with what we said.


But please, it doesn't help to just repeat back what they said, and 
declare that Harley and I were wrong.




Richard Loosemore


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] Hunting for a Brainy Computer

2008-11-21 Thread Richard Loosemore

Ben Goertzel wrote:

I saw the  main point of Richard's paper as being that the available
neuroscience data drastically underdetermines the nature of neural
knowledge representation ... so that drawing conclusions about neural
KR from available data involves loads of theoretical presuppositions
...

However, my view is that this is well known among neuroscientists, and
your reading of the Quiroga et al paper supports this...


You have still not answered my previous question about your claim that 
there are essentially no neuroscientists who say that spiking patterns 
in single neurons encode relationships between concepts.


And yet now you make another assertion about something that you think is 
well known among neuroscientists, while completely ignoring the actual 
argument that Harley and I brought to bear on this issue.




Richard Loosemore





ben g

On Fri, Nov 21, 2008 at 1:33 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:

On Fri, Nov 21, 2008 at 8:34 PM, Richard Loosemore [EMAIL PROTECTED] wrote:

No, object-concepts and the like.  Not place, motion or action 'concepts'.

For example, Quiroga et al showed their subjects pictures of famous places
and people, then made assertions about how those things were represented.


Now that I have a bit better understanding of neuroscience than a year
ago, I reread relevant part of your paper and skimmed the Quiroga et
al's paper (Invariant visual representation by single neurons in the
human brain, for those who don't want to look it up in Richard's
paper). I don't see a significant disagreement. They didn't mean to
imply obviously wrong assertion that there are only few cells
corresponding to each high-level concept (to quote: the fact that we
can discover in this short time some images -- such as photographs of
Jennifer Aniston -- that drive the cells, suggests that each cell
might represent more than one class of images). Sparse and
distributed representations are mentioned as extreme perspectives, not
a dichotomy. Results certainly have some properties of sparse
representation, as opposed to extremely distributed, which doesn't
mean that results imply extremely sparse representation. Observed
cells as correlates of high-level concepts were surprisingly invariant
to the form in which that high-level concept was presented, which does
suggest that representation is much more explicit than in the
extremely distributed case. Of course, it's not completely explicit.

So, at this point I see at least this item in your paper as a strawman
objection (given that I didn't revisit other items).

--
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: https://www.listbox.com/member/?;
Powered by Listbox: http://www.listbox.com









---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] Hunting for a Brainy Computer

2008-11-21 Thread Richard Loosemore

Ben Goertzel wrote:

On Fri, Nov 21, 2008 at 4:44 PM, Richard Loosemore [EMAIL PROTECTED] wrote:

Ben Goertzel wrote:

I saw the  main point of Richard's paper as being that the available
neuroscience data drastically underdetermines the nature of neural
knowledge representation ... so that drawing conclusions about neural
KR from available data involves loads of theoretical presuppositions
...

However, my view is that this is well known among neuroscientists, and
your reading of the Quiroga et al paper supports this...

You have still not answered my previous question about your claim that there
are essentially no neuroscientists who say that spiking patterns in single
neurons encode relationships between concepts.



I did reply to that email


Uh, that is not the case, as far as I can see.

Maybe you better check your email stream:  I can see no reply to it here.







And yet now you make another assertion about something that you think is
well known among neuroscientists, while completely ignoring the actual
argument that Harley and I brought to bear on this issue.


I read that paper a year or two ago, I don't remember the details and don't
feel like looking them up right now, sorry... I was admittedly replying based on
a semi-dim recollection...

My recollection is that you were arguing various neuroscientists were
overinterpreting their data, and drawing cognitive conclusions from fMRI
and other data that were not really warranted by the data without loads of
other theoretical assumptions.  Sorry if this was the wrong take-away point,
but that's what I remember from it ;-)

ben


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: https://www.listbox.com/member/?;
Powered by Listbox: http://www.listbox.com






---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] Hunting for a Brainy Computer

2008-11-21 Thread Richard Loosemore

Vladimir Nesov wrote:

On Sat, Nov 22, 2008 at 12:30 AM, Richard Loosemore [EMAIL PROTECTED] wrote:

They want some kind of mixture of sparse and multiply redundant and not
distributed.  The whole point of what we wrote was that there is no
consistent interpretation of what they tried to give as their conclusion.
 If you think there is, bring it out and put it side by side with what we
said.



There is always a consistent interpretation that drops their
interpretation altogether and leaves the data. I don't see their
interpretation as strongly asserting anything. They are just saying
the same thing in a different language you don't like or consider
meaningless, but it's a question of definitions and style, not
essence, as long as the audience of the paper doesn't get confused.



Let me spell it out carefully.

If we try to buy their suggestion that the MTL represents concepts (such 
as Jennifer Aniston) in a sparse manner, then this means that a 
fraction S of the neurons in MTL encode Jennifer Aniston, and the 
fraction is small.


Now, if the fraction S is small, then the probability of Quiroga et al 
hitting some neuron in the set, using a random probe, is also small.


Agreed?

Clearly, as Quiroga et al point out themselves, if the probability S is 
very small, we should be surprised if that random probe actually did 
find a Jennifer Aniston cell.


So...

To make the argument work, they have to suggest that the number of 
Jennifer Aniston cells is actually a very significant percentage of the 
total number of cells.  In other words, sparse must mean about one in 
every hundred cells, or something like that (it's late, and I am tired, 
so I am not about to do the math, but if Quiroga et al do about a 
hundred probes and *one* of those is a JA cell, it clearly cannot be one 
in a million cells).


Agreed?

But, if that is the case, then each cell must be encoding many concepts, 
because otherwise there would not be enough cells to encode more than 
about a hundred concepts, would there?  They admit this in the paper: 
each cell might represent more than one class of images.  But there 
are perhaps hundreds of thousands of different images that a given 
person can recognize, so in that case, each neuron must be representing 
(of the order of) thousands of images.
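
(A back-of-envelope rendering of that arithmetic, in Python.  The round numbers 
are only the ones used loosely above: about a hundred probes, one Jennifer 
Aniston hit, and of the order of a hundred thousand recognizable images; none 
of them are figures taken from the paper itself.)

probes           = 100        # order of magnitude of units sampled
hits_per_concept = 1          # roughly one probe landed on a "JA" cell
S = hits_per_concept / probes # implied fraction of MTL cells per concept

n_images = 100_000            # rough number of images a person can recognize
concepts_per_cell = S * n_images

print(S)                  # 0.01   -> about one cell in every hundred per concept
print(concepts_per_cell)  # 1000.0 -> each cell would help encode ~1000 concepts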


The points that Harley and I made were:

1) In what sense is the representation sparse and not distributed if 
each neuron encodes thousands of images?  Roughly one percent of the 
neurons in the MTL are used for each concept, and each neuron represents 
thousands of other concepts:  this is just as accurate a description of 
a distributed representation, and it is a long way from anything that 
resembles a grandmother cell situation.


And yet, Quiroga et al give their paper the title Invariant visual 
representation by single neurons in the human brain.  They say SINGLE 
neurons, when what is implied is that 1% of the entire MTL (or roughly 
that number) is dedicated to representing a concept like Jennifer 
Aniston.  They seem to want to have their cake and eat it too:  they put 
single neurons in the title, but buried in their logic is the 
implication that vast numbers of neurons are redundantly coding for each 
concept.  That is an *incoherent* claim.


2) This entire discussion of the contrast between sparse and distributed 
representations has about it the implication that neurons are a unit 
that has some functional meaning, when talking about concepts.  But 
Harley and I described an example of a different (more sophisticated) way 
to encode concepts, in which it made no sense to talk about these 
particular neurons as encoding particular concepts.  The neurons were 
just playing the role of dumb constituents in a larger structure, while 
the actual concepts were (in essence) patterns of activation that were 
just passing through.  (A toy sketch of this alternative appears below.)


This alternate conception of what might be going on leads us to the 
conclusion that the distinction Quiroga et al make between sparse and 
distributed is not necessarily meaningful at all.  In our alternate 
conception, the distinction is meaningless, and the conclusion that 
Quiroga et al draw (that there is an invariant, sparse and explicit 
code) is not valid - it is only a coherent conclusion if we buy the 
idea that individual neurons are doing some representing of concepts.


In other words, the conclusion was incoherent in this sense also.  It 
was theory laden.
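
(Here is the toy sketch referred to under point 2: a minimal Python 
illustration, with invented sizes, of concepts as activation patterns passing 
through units that individually stand for nothing.  Any one unit fires for 
roughly half of all the concepts, yet the population state still picks out 
which concept is currently present.)

import numpy as np

rng = np.random.default_rng(2)
n_units, n_concepts = 200, 1000
codebook = rng.integers(0, 2, size=(n_concepts, n_units))  # dense, distributed patterns

unit = 17
print(codebook[:, unit].mean())   # ~0.5: this unit is active for about half the concepts

pattern = codebook[42]            # the pattern currently "passing through" the units
overlaps = codebook @ pattern
print(int(overlaps.argmax()))     # 42: the population as a whole identifies the concept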




The whole mess is summed up quite well by a statement that they make:


In the ... case [of distributed representation], recognition would 
require the simultaneous activation of a large number of cells and 
therefore we would expect each cell to respond to many pictures with 
similar basic features.  This is in contrast to the sparse firing we 
observe, because most MTL cells do not respond to the great majority of 
images seen by the patient.



But the only way to make their 'sparse' interpretation work would be to 
have (about) 1

Re: [agi] Hunting for a Brainy Computer

2008-11-21 Thread Richard Loosemore

Ben Goertzel wrote:

I don't think Quiroga et al's statements are contradictory, just
irritatingly vague...

I agree w Richard that the distributed vs sparse dichotomy is poorly
framed and in large part a bogus dichotomy

I feel the same way about the symbolic vs subsymbolic dichotomy...

Many of the conceptual distinctions at the heart of standard cognitive
science theory are very poorly defined, it's disappointing...


Well, we agree on that much then. ;-)


All I can say is that I am working my way through the entire corpus of 
knowledge in cog sci, attempting to unify it in such a way that it 
really does all hang together, and become well defined enough to be both 
testable and buildable as a complete AGI.


The paper I wrote with Harley, and the more recent one on consciousness, 
were just a couple of opening salvos in that effort.






Richard Loosemore


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] Professor Asim Roy Finally Publishes Controversial Brain Theory

2008-11-20 Thread Richard Loosemore

BillK wrote:

Nobody has mentioned this yet.

http://www.physorg.com/news146319784.html


I got a draft version of the paper earlier this year, and after a quick 
scan I filed it under 'junk'.


I just read it through again, and the filing stays the same.

His basic premise is that connectionists argued from the very beginning 
that they wanted to do things in a way that did not involve a central 
executive.  They wanted to see how much could be done by having large 
numbers of autonomous units do things independently.  Turns out, quite a 
lot can be achieved that way.


But it seems that Asim Roy has fundamentally misunderstood the force and 
the intent of that initial declaration by the connectionists.  There was 
a reason they said what they said:  they wanted to get away from the old 
symbol processing paradigm in which one thing happened at a time and 
symbols were separated from the mechanisms that modified or used 
symbols.  The connectionists were not being dogmatic about "No 
Controllers!", they just wanted to stop all power being vested in the 
hands of a central executive ... and their motivation was from cognitive 
science, not engineering or control theory.


Roy seems to be completely obsessed with the idea that they are wrong, 
while at the same time not really understanding why they said it, and 
not really having a concrete proposal (or account of empirical data) to 
substitute for the connectionist ideas.


To tell the truth, I don't think there are many connectionists who are 
so hell-bent on the idea of not having a central controller, that they 
would not be open to an architecture that did have one (or several). 
They just don't think it would be good to have central controllers in 
charge of ALL the heavy lifting.


Roy's paper has the additional disadvantage of being utterly filled with 
underlines and boldface.  He shouts.  Not good in something that is 
supposed to be a scientific paper.


Sorry, but this is just junk.




Richard Loosemore



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Richard Loosemore

Pei Wang wrote:

Derek,

I have no doubt that their proposal contains interesting ideas and
will produce interesting and valuable results --- most AI projects do,
though the results and the values are often not what they targeted (or
they claimed to be targeting) initially.

Biologically inspired approaches are attractive, partly because they
have existing proof for the mechanism to work. However, we need to
remember that inspired by a working solution is one thing, and to
treat that solution as the best way to achieve a goal is another.
Furthermore, the difficult part in these approaches is to separate the
aspect of the biological mechanism/process that should be duplicated
from the aspects that shouldn't.


I share your concerns about this project, although I might have a 
slightly different set of reasons for being doubtful.


I watched part of one of the workshops that Mohdra chaired, on Cognitive 
Computing, and it gave me the same feeling that neuroscience gatherings 
always give me:  a lot of talk about neural hardware, punctuated by 
sudden, out-of-the-blue statements about cognitive ideas that seem 
completely unrelated to the ocean of neural talk that comes before and 
after.


There is a *depressingly* long history of people doing this - and not 
just in neuroscience, but in many branches of engineering, in physics, 
in computer science, etc.  There are people out there who know that the 
mind is the new frontier, and they want to be in the party.  They also 
know that the cognitive scientists (in the broad sense) are probably the 
folks who are at the center of the party (in the sense of having most 
comprehensive knowledge).  So these people do what they do best, but add 
in a sprinkling of technical terms and (to be fair) some actual 
knowledge of some chunks of cognitive science.


Problem is, to a cognitive scientist what they are doing is 
amateurish.


Another, closely related thing that they do is talk about low level 
issues without realizing just how disconnected those are from where the 
real story (probably) lies.  Thus, Mohdra emphasizes the importance of 
spike timing as opposed to average firing rate.  He may well be right 
that the pattern or the timing is more important, but IMO he is doing 
the equivalent of saying Let's talk about the best way to design an 
algorithm to control an airport.  First problem to solve:  should we use 
Emitter-Coupled Logic in the transistors that are in our computers that 
will be running the algorithms.






Richard Loosemore



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-20 Thread Richard Loosemore
, but I and a lot of other people can --- including 
brain surgeons who actually prove the value of their understandings with 
successful operations that are probably performed many times a day 
around the world.


This has got nothing whatever to do with the question under discussion.






 


### My prior post 


P.S. (With regard to the alleged bottoming out reported in your paper:
as I have pointed out in previous threads, even the lowest level nodes
in any system would normally have associations that would give them a
type and degree of grounding and, thus, further meaning.  So that
spreading activation would normally not bottom out when it reaches the
lowest level nodes.  But it would be subject to circularity, or a lack of
information about lowest nodes other than what could be learned from
their associations with other nodes in the system.)






### Richard said 

The spreading activation you are talking about is not the same as the
operation of the analysis mechanism.  You are talking about things other
than the analysis mechanism that I have posited.  Hence not relevant.

 


## My response ##

Richard, you above said that

“Concepts are judged real by the system to the extent that they play a
very strongly anchored and consistent role in the foreground.  Color
concepts are anchored more strongly than any other, hence they are very
real.”

 

This means that you, yourself, consider low level color inputs to have 
a type of grounding in the foreground of your molecular framework, which 
is one aspect of what I was talking about and what you are denying 
immediately above.  The very sense of realness which you claim the 
system associates with these color input nodes is part of the total 
analysis the system makes of them.  That colors play a major role in 
visual sensation, and the nature of the role that they play in that 
sensation, such as that it appears to fill areas in a roughly 2D 
perception of the world, are analyzable attributes of them. 

 

You are free to define “analysis” narrowly to avoid my argument, but 
that would serve no purpose other than trying to protect your vanity.



Okay, this is where the discussion ends, at the words "your vanity".

You have no idea that the definition of analysis mechanism I use MUST 
be defined that narrowly if it is to address the Hard problem.


Furthermore, you have now done what you always do at this point in my 
attempts to respond to your points:  you start making comments about me 
personally.  Thus:  "but that would serve no purpose other than trying 
to protect your vanity."




I now regret wasting so much time attempting to respond to what seemed 
to be a politely worded set of questions about the paper.  I noticed 
your post because Ben quoted it, and I noticed that it was not abusive. 
 So I made the mistake of engaging you again.


In all of the above discussion I find myself trying to explain that you 
must not confuse the hard problem and the non-hard problems of 
consciousness, because the non-hard problems have nothing whatever to do 
with the argument.  I have now done this - what? - a dozen times at 
least.  I have been doing this from the beginning, but instead of 
listening to my repeated attempts to get you to understand the 
distinction, you only repeat the same mistake over and over again.




This is what I get for trying to engage in debate with someone who picks 
up a technical distinction (e.g. Hard/Non-Hard problem of consciousness) 
from Wikipedia, and then, a couple of days later, misapplies the concept 
left right and center.



Sorry:  I did make one last effort, but there is a limit to how many 
times I can say the same thing and be ignored every time.





Richard Loosemore




If one is talking about the sense of experience and mental associations 
a normal human mind associates with the color red, one is talking about 
a complex of activations that involve much more than just the activation 
of a single or even a contiguous group of lowest level color sensing 
nodes.  For example, the system has to have a higher level concept to 
let it know that a given color red in one part of the visual field is 
the same as the same color red in another part of the visual field.  
This is a higher level concept that is vital to any analysis of the 
meaning of the activation of a lowest level red receptor node.


 

So what you are rejecting as irrelevant are not only clearly relevant to 
your own argument, but they are relevant to any honest attempt to 
understand the subjective experience of the activation of the types of 
lower level nodes your paper places such an emphasis on.


 


Ed Porter

=

P.S. 

 

I have gotten sufficiently busy that I should not have taken the time to 
write this response, but because of the thoughtfulness of your below 
email, I felt obligated to respond.  Unfortunately if you

Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Richard Loosemore

Steve Richfield wrote:

Richard,
 
Broad agreement, with one comment from the end of your posting...
 
On 11/20/08, *Richard Loosemore* [EMAIL PROTECTED] wrote:


Another, closely related thing that they do is talk about low level
    issues without realizing just how disconnected those are from where
the real story (probably) lies.  Thus, Mohdra emphasizes the
importance of spike timing as opposed to average firing rate.

 
There are plenty of experiments that show that consecutive 
closely-spaced pulses result when something goes off scale, probably 
the equivalent to computing Bayesian probabilities > 100%, somewhat akin 
to the overflow light on early analog computers. These closely-spaced 
pulses have a MUCH larger post-synaptic effect than the same number of 
regularly spaced pulses. However, as far as I know, this only occurs 
during anomalous situations - maybe when something really new happens, 
that might trigger learning?
 
IMHO, it is simply not possible to play this game without having a close 
friend with years of experience poking mammalian neurons. This stuff is 
simply NOT in the literature.


He may well be right that the pattern or the timing is more
important, but IMO he is doing the equivalent of saying Let's talk
about the best way to design an algorithm to control an airport.
 First problem to solve:  should we use Emitter-Coupled Logic in the
transistors that are in our computers that will be running the
algorithms.

 
 Still, even with my above comments, your conclusion is still correct.


The main problem is that if you interpret spike timing to be playing 
the role that you (and they) imply above, then you are committing 
yourself to a whole raft of assumptions about how knowledge is generally 
represented and processed.  However, there are *huge* problems with that 
set of implicit assumptions  not to put too fine a point on it, 
those implicit assumptions are equivalent to the worst, most backward 
kind of cognitive theory imaginable.  A theory that is 30 or 40 years 
out of date.


The gung-ho neuroscientists seem blissfully unaware of this fact because 
 they do not know enough cognitive science.




Richard Loosemore


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Richard Loosemore

Ben Goertzel wrote:

Richard,


The main problem is that if you interpret spike timing to be playing the
role that you (and they) imply above, then you are committing yourself to a
whole raft of assumptions about how knowledge is generally represented and
processed.  However, there are *huge* problems with that set of implicit
assumptions  not to put too fine a point on it, those implicit
assumptions are equivalent to the worst, most backward kind of cognitive
theory imaginable.  A theory that is 30 or 40 years out of date.

The gung-ho neuroscientists seem blissfully unaware of this fact because
 they do not know enough cognitive science.

Richard Loosemore



I don't think this is the reason.  There are plenty of neuroscientists
out there
who know plenty of cognitive science.

I think many neuroscientists just hold different theoretical
presuppositions than
you, for reasons other than ignorance of cog sci data.

Interdisciplinary cog sci has been around a long time now as you know ... it's
not as though cognitive neuroscientists are unaware of its data and ideas...


I disagree.

Trevor Harley wrote one very influential paper on the subject, and he 
and I wrote a second paper in which we took a random sampling of 
neuroscience papers and analyzed them carefully.  We found it trivially 
easy to gather data to illustrate our point.  And, no, even though I 
used my own framework as a point of reference, this was not crucial to 
the argument, merely a way of bringing the argument into sharp focus.


So I am basing my conclusion on gathering actual evidence and publishing 
a paper about it.


Since such luminaries as Jerry Fodor have said much the same thing, I 
think I stand in fairly solid company.





Richard Loosemore


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Richard Loosemore

Vladimir Nesov wrote:

On Fri, Nov 21, 2008 at 1:40 AM, Richard Loosemore [EMAIL PROTECTED] wrote:

The main problem is that if you interpret spike timing to be playing the
role that you (and they) imply above, then you are committing yourself to a
whole raft of assumptions about how knowledge is generally represented and
processed.  However, there are *huge* problems with that set of implicit
assumptions ... not to put too fine a point on it, those implicit
assumptions are equivalent to the worst, most backward kind of cognitive
theory imaginable.  A theory that is 30 or 40 years out of date.



Could you give some references to be specific in what you mean?
Examples of what you consider outdated cognitive theory and better
cognitive theory.



Well, you could start with the question of what the neurons are supposed 
to represent, if the spikes are coding (e.g.) bayesian contingencies. 
Are the neurons the same as concepts/symbols?  Are groups of neurons 
redundantly coding for concepts/symbols?


One or other of these possibilities is usually assumed by default, but 
this leads to glaring inconsistencies in the interpretation of 
neuroscience data, as well as begging all of the old questions about how 
grandmother cells are supposed to do their job.  As I said above, 
cognitive scientists already came to the conclusion, 30 or 40 years ago, 
that it made no sense to stick to a simple identification of one neuron 
per concept.  And yet many neuroscientists are *implicitly* resurrecting 
this broken idea, without addressing the faults that were previously 
found in it.  (In case you are not familiar with the faults, they 
include the vulnerability of neurons, the lack of connectivity between 
arbitrary neurons, the problem of assigning neurons to concepts, and the 
encoding of variables, relationships and negative facts ...)
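
To make the vulnerability fault concrete, here is a toy comparison 
(invented purely for illustration, not taken from any of the papers under 
discussion): five concepts coded either by one dedicated unit each, or by 
distributed patterns over 100 units. Killing the same ten units erases 
the localist concepts outright but only slightly degrades the distributed 
ones:

import numpy as np

# Toy comparison of localist ("one neuron per concept") and distributed coding.
rng = np.random.default_rng(1)
n_units, n_concepts = 100, 5

# Localist: one dedicated unit per concept.
localist = np.zeros((n_concepts, n_units))
localist[np.arange(n_concepts), np.arange(n_concepts)] = 1.0

# Distributed: each concept is a random pattern spread over all units.
distributed = rng.standard_normal((n_concepts, n_units))
distributed /= np.linalg.norm(distributed, axis=1, keepdims=True)

dead = np.arange(10)                 # lesion the first ten units, which
mask = np.ones(n_units)              # happen to include every localist unit
mask[dead] = 0.0

def survival(patterns):
    """Cosine similarity of each lesioned pattern to its intact original."""
    damaged = patterns * mask
    num = (damaged * patterns).sum(axis=1)
    den = np.linalg.norm(damaged, axis=1) * np.linalg.norm(patterns, axis=1)
    return num / np.where(den == 0.0, 1.0, den)

print("localist survival:   ", survival(localist))      # all 0.0: concepts gone
print("distributed survival:", survival(distributed))   # ~0.95: graceful decay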


For example, in Loosemore & Harley (in press) you can find an analysis 
of a paper by Quiroga, Reddy, Kreiman, Koch, and Fried (2005) in which 
the latter try to claim they have evidence in favor of grandmother 
neurons (or sparse collections of grandmother neurons) and against the 
idea of distributed representations.


We showed their conclusion to be incoherent.  It was deeply implausible, 
given the empirical data they reported.


Furthermore, we used my molecular framework (the same one that was 
outlined in the consciousness paper) to see how that would explain the 
same data.  It turns out that this much more sophisticated model was 
very consistent with the data (indeed, it is the only one I know of that 
can explain the results they got).


You can find our paper at www.susaro.com/publications.



Richard Loosemore


Loosemore, R.P.W. & Harley, T.A. (in press). Brains and Minds: On the 
Usefulness of Localisation Data to Cognitive Psychology. In M. Bunzl & 
S.J. Hanson (Eds.), Foundations of Functional Neuroimaging. Cambridge, 
MA: MIT Press.


Quiroga, R. Q., Reddy, L., Kreiman, G., Koch, C. & Fried, I. (2005). 
Invariant visual representation by single neurons in the human brain. 
Nature, 435, 1102-1107.




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Richard Loosemore

Trent Waddington wrote:

On Fri, Nov 21, 2008 at 11:02 AM, Richard Loosemore [EMAIL PROTECTED] wrote:

Since such luminaries as Jerry Fodor have said much the same thing, I think
I stand in fairly solid company.


Wow, you said Fodor without being critical of his work.  Is that legal?

Trent


Arrrggghhh... you noticed!  :-(

I was hoping nobody would catch me out on that one.

Okay, so Fodor and I disagree about everything else.

But that's not the point :-).  He's a Heavy, so if he is on my side on 
this one issue, it's okay to quote him.  (That's my story and I'm 
sticking to it.)






Richard Loosemore



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Richard Loosemore
.

Theory as Metaphor
This pattern of theorizing—first a candidate mechanism, then a rival 
mechanism that is noticeably different, then some experiments to tell us 
which is better—is the bread and butter of cognitive science. However, 
it is one thing to decide between two candidate mechanisms that are 
sketched in the vaguest of terms (with just enough specificity to allow 
the two candidates to be distinguished), and quite another to make a 
categorical statement about the precise nature of the mechanism. To be 
blunt, very few cognitive psychologists would intend the idea of packages 
drifting through a system and encountering places where there is only 
room for one to be taken that literally.
On a scale from “metaphor” at one end to “mechanism blueprint” at the 
other, the idea of a bottleneck is surely nearer to the metaphor end. 
How many cognitive theorists would say that they are trying to pin down 
the mechanisms of cognition so precisely that every one of the 
subsidiary assumptions involved in a theory is supposed to be taken 
exactly as it comes? In the case of the bottleneck theory, for 
instance, the task packages look suspiciously like symbols being 
processed by a symbol system, in old-fashioned symbolic-cognition style: 
but does that mean that connectionist implementations are being 
explicitly ruled out by the theory? Does the theory buy into all of the 
explicit representation issues involved in symbol processing, where the 
semantics of a task package is entirely contained within the package 
itself, rather than distributed in the surrounding machinery? These and 
many other questions are begged by the idea of task packages moving 
around a system and encountering a bottleneck, but would theorists who 
align themselves with the bottleneck theory want to say that all of 
these other aspects must be taken literally?
We think not. In fact, it seems more reasonable to suppose that the 
present state of cognitive psychology involves the search for 
metaphor-like ideas that are described as if they were true mechanisms, 
but which should not be taken literally by anyone, and especially not by 
anyone with a brain imaging device who wants to locate those mechanisms 
in the brain.


ENDQUOTE-



Richard Loosemore
















---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss

Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Richard Loosemore

Ben Goertzel wrote:

  The "neuron = concept" 'theory' is extremely broken:  it is so broken 
that when neuroscientists talk about bayesian contingencies being 
calculated or encoded by spike timing mechanisms, that claim is incoherent.


This is not always true ... in some cases there are solidly demonstrated
connections between neurally computed bayesian contingencies and
observed perceptual and motor phenomena in organisms...

I agree that no one knows how abstract concepts are represented in the brain,
but for sensorimotor stuff it is not the case that work on bayesian population
coding in the brain is incoherent


No contest:  it is valid there.

But I am only referring to the cases where neuroscientists imply that 
what they are talking about are higher level concepts.


This happens extremely frequently.
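
For concreteness, the sensorimotor work Ben is pointing at is usually 
some form of Bayesian population decoding: a bank of broadly tuned 
neurons, noisy spike counts, and a posterior over the stimulus computed 
by Bayes' rule. The sketch below assumes Gaussian tuning curves, 
independent Poisson counts and a uniform prior, with all numbers invented 
for illustration; it says nothing about how higher level concepts would 
be represented, which is exactly the point at issue:

import numpy as np

# Minimal Bayesian population decoding sketch (Gaussian tuning, Poisson noise).
rng = np.random.default_rng(0)

stim_grid = np.linspace(-90.0, 90.0, 181)   # candidate stimulus values (deg)
preferred = np.linspace(-90.0, 90.0, 20)    # preferred stimuli of 20 neurons
sigma, r_max = 20.0, 30.0                   # tuning width (deg), peak rate (Hz)

def rates(s):
    """Mean firing rate of each neuron for stimulus s."""
    return r_max * np.exp(-0.5 * ((s - preferred) / sigma) ** 2)

true_s = 37.0
counts = rng.poisson(rates(true_s))         # one observed population response

# Log-likelihood of each candidate stimulus under independent Poisson counts,
# then normalise to get the posterior (uniform prior).
lam = rates(stim_grid[:, None])             # shape (n_stimuli, n_neurons)
log_like = (counts * np.log(lam) - lam).sum(axis=1)
posterior = np.exp(log_like - log_like.max())
posterior /= posterior.sum()

print("true stimulus:       ", true_s)
print("posterior mean (deg):", round(float((stim_grid * posterior).sum()), 1))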



Richard Loosemore


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-19 Thread Richard Loosemore
 that in that case the explanation is a failure of the 
analysis mechanism because it bottoms out.


However, just because I picked that example for the sake of clarity, 
that does not mean that the *only* place where the analysis mechanism 
can get into trouble must be just when it bumps into those peripheral 
atoms.  I tried to explain this in a previous reply to someone (perhaps 
it was you):  it would be entirely possible that higher level atoms 
could get built to represent [a sum of all the qualia-atoms that are 
part of one object], and if that happened we might find that this higher 
level atom was partly analyzable (it is composed of lower level qualia) 
and partly not (any analysis hits the brick wall after one successful 
unpacking step).


So when you raise the example of being conscious of your son, it can be 
partly a matter of the consciousness that comes from just consciousness 
of his parts.


But there are other things that could be at work in this case, too.  How 
much is that consciousness of a whole object an awareness of an 
internal visual image?  How much is it due to the fact that we can 
represent the concept of [myself having a concept of object x] ... in 
which case the unanalyzability derives not from the large object, 
but from the fact that [self having a concept of...] is a representation 
of something your *self* is doing ... and we know already that that is 
a bottoming-out concept.


Overall, you can see that there are multiple ways to get the analysis 
mechanism to bottom out, and it may be able to bottom out partially 
rather than completely.  Just because I used a particular example of 
bottoming-out does not mean that I claimed this was the only way it 
could happen.
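
To make the partial bottoming-out idea concrete, here is a toy sketch 
(the atom names and structure are invented for the example; this is not 
code from the molecular framework): a recursive analyse() that unpacks 
composite atoms into their constituents and reports a dead end when it 
reaches a peripheral atom, so a composite built directly from peripheral 
atoms unpacks exactly one step and then stops:

# Toy illustration of the analysis mechanism "bottoming out".
ATOMS = {
    # peripheral, qualia-level atoms: nothing below them to unpack
    "red":   None,
    "round": None,
    "sweet": None,
    # a higher level atom built directly from peripheral atoms
    "apple-percept": ["red", "round", "sweet"],
    # a still higher level atom built from analyzable parts
    "fruit-bowl": ["apple-percept", "apple-percept"],
}

def analyse(atom, depth=0):
    """Recursively unpack an atom, reporting where analysis bottoms out."""
    indent = "  " * depth
    parts = ATOMS[atom]
    if parts is None:
        print(f"{indent}{atom}: unanalyzable (analysis bottoms out here)")
        return
    print(f"{indent}{atom}: composed of {parts}")
    for p in parts:
        analyse(p, depth + 1)

analyse("fruit-bowl")
# "fruit-bowl" unpacks cleanly; "apple-percept" unpacks one step and then
# every constituent hits the brick wall, i.e. partial analyzability.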


And, of course, all those other claims of conscious experiences are 
widely agreed to be more dilute (less mysterious) than such things as 
qualia.





Richard Loosemore


























---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-19 Thread Richard Loosemore
 at simply re-stating your position that you do not think the 
theory succeeds in explaining the subject, whereas I cannot bring you 
round to talking about what is the most important idea in the paper: 
that such simple statements as the ones you are making are just using a 
concept of explanation without examining it.


So we still have not addressed the content of part 2 of the paper.  I 
did try to say all of the above in the last post, but you didn't mention 
that bit in your reply ;-)






Richard Loosemore


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-19 Thread Richard Loosemore

Ben Goertzel wrote:


Richard,

So are you saying that: According to the ordinary scientific standards 
of 'explanation', the subjective experience of consciousness cannot be 
explained ... and as a consequence, the relationship between subjective 
consciousness and physical data (as required to be elucidated by any 
solution to Chalmers' hard problem as normally conceived) also cannot 
be explained.


If so, then: according to the ordinary scientific standards of 
explanation, you are not explaining consciousness, nor explaining the 
relation btw consciousness and the physical ... but are rather 
**explaining why, due to the particular nature of consciousness and its 
relationship to the ordinary scientific standards of explanation, this 
kind of explanation is not possible**


??


No!

If you write the above, then you are summarizing the question that I 
pose at the half-way point of the paper, just before the second part 
gets underway.


The ordinary scientific standards of explanation are undermined by 
questions about consciousness.  They break.  You cannot use them.  They 
become internally inconsistent.  You cannot say "I hereby apply the 
standard mechanism of 'explanation' to Problem X", but then admit that 
Problem X IS the very mechanism that is responsible for determining the 
'explanation' method you are using, AND the one thing you know about 
that mechanism is that you can see a gaping hole in the mechanism!


You have to find a way to mend that broken standard of explanation.

I do that in part 2.

So far we have not discussed the whole paper, only part 1.



Richard Loosemore


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-19 Thread Richard Loosemore
 of 
your paper.


 

In multiple prior posts on this thread I have said I believe the real 
source of consciousness appears to lie in such a molecular framework, 
but that to have anything approaching a human level of such 
consciousness this framework, and its computations that give rise to 
consciousness, have to be extremely complex.  I have also emphasized 
that brain scientist who have already done research on the neural 
correlates of consciousness, tend to indicate humans usually only report 
consciousness of things associated with fairly broad spread neural 
activation, which would normally involve many billions or trillions of 
inter-neuron messages per second.


The data produced by neuroscience, at this point, is extremely 
confusing.  It is also obscured by people who are themselves confused 
about the distinction between the Hard and Easy problems.  I do not 
believe you can deduce anything meaningful from the neural research yet. 
See Loosemore and Harley (forthcoming).






I have posited that widespread 
activation of the nodes directly and indirectly associated with a given 
“conscious” node provides dynamic grounding for the meaning of the 
conscious node.


 

As I have pointed out, we know of nothing about physical reality that is 
anything other than computation (if you consider representation to be 
part of computation).  Similarly there is nothing our subjective 
experience can tell us about our own consciousnesses that is other than 
computation.  One of the key words we humans use to describe our 
consciousnesses is “awareness.”  Awareness is created by computation.  
It is my belief that this awareness comes from the complex, dynamically 
focused, and meaningful way in which our thought processes compute in 
interaction with themselves.


 


Ed Porter

 

P.S. /(With regard to the alleged bottoming out reported in your paper: 
as I have pointed out in previous threads, even the lowest level nodes 
in any system would normally have associations that would give them a 
type and degree of grounding and, thus, further meaning ... So that 
spreading activation would normally not bottom out when it reaches the 
lowest level nodes.  But it would be subject to circularity, or a lack of 
information about lowest nodes other than what could be learned from 
their associations with other nodes in the system.)/




The spreading activation you are talking about is not the same as the 
operation of the analysis mechanism.  You are talking about things other 
than the analysis mechanism that I have posited.  Hence not relevant.




Regards




Richard Loosemore



 

 

 


-Original Message-
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 19, 2008 1:57 PM
To: agi@v2.listbox.com
Subject: Re: [agi] A paper that actually does solve the problem of 
consciousness


 


Ben Goertzel wrote:


 Richard,

 I re-read your paper and I'm afraid I really don't grok why you think it
 solves Chalmers' hard problem of consciousness...

 It really seems to me like what you're suggesting is a "cognitive
 correlate of consciousness", to morph the common phrase "neural
 correlate of consciousness" ...

 You seem to be stating that when X is an unanalyzable, pure atomic
 sensation from the perspective of cognitive system C, then C will
 perceive X as a raw quale ... unanalyzable and not explicable by
 ordinary methods of explication, yet, still subjectively real...

 But, I don't see how the hypothesis

 "Conscious experience is **identified with** unanalyzable mind-atoms"

 could be distinguished empirically from

 "Conscious experience is **correlated with** unanalyzable mind-atoms"

 I think finding cognitive correlates of consciousness is interesting,
 but I don't think it constitutes solving the hard problem in Chalmers'
 sense...

 I grok that you're saying consciousness feels inexplicable because it
 has to do with atoms that the system can't explain, due to their role as
 its primitive atoms ... and this is a good idea, but, I don't see how
 it bridges the gap btw subjective experience and empirical data ..

 What it does is explain why, even if there *were* no hard problem,
 cognitive systems might feel like there is one, in regard to their
 unanalyzable atoms

 Another worry I have is: I feel like I can be conscious of my son, even
 though he is not an unanalyzable atom.  I feel like I can be conscious
 of the unique impression he makes ... in the same way that I'm conscious
 of redness ... and, yeah, I feel like I can't fully explain the
 conscious impression he makes on me, even though I can explain a lot of
 things about him...

 So I'm not convinced that atomic sensor input is the only source of raw,
 unanalyzable consciousness...


My first response to this is that you still don't seem to have taken
account of what was said in the second part

Re: [agi] Now hear this: Human qualia are generated in the human cranial CNS and no place else

2008-11-18 Thread Richard Loosemore

Colin Hales wrote:

Mike Tintner wrote:
Colin: Qualia generation has been highly localised into specific 
regions in *cranial* brain material already. Qualia are not in the 
periphery. Qualia are not in the spinal CNS. Qualia are not in the 
cranial periphery, e.g. eyes or lips.
 
Colin,
 
This is to a great extent nonsense. Which sensation/emotion - (qualia 
is a word strictly for philosophers not scientists, I suggest) - is 
not located in the body? When you are angry, you never frown or bite 
or tense your lips? The brain helps to generate the emotion - (and 
note helps). But emotions are bodily events - and *felt* bodily.
 
This whole discussion ignores the primary paradox about consciousness, 
(which is first and foremost sentience) :  *the brain doesn't feel a 
thing* - sentience/feeling is located in the body outside the brain. 
When a surgeon cuts your brain, you feel nothing. You feel and are 
conscious of your emotions in and with your whole body.
I am talking about the known, real actual origins of *all* phenomenal 
fields. This is anatomical/physiological fact for 150 years. You don't 
see with your eyes. You don't feel with your skin. Vision is in the 
occipital cortex. The eyes provide data. Skin provides the data, CNS 
somatosensory field delivers the experience of touch and projects it to 
the skin region. ALL perceptions, BAR NONE, including all emotions, 
imagination, everything - ALL of it is actually generated in cranial 
CNS.  Perceptual fields are projected from the CNS to appear AS-IF they 
originate in the periphery. The sensory measurements themselves convey 
no sensations at all. 

I could give you libraries of data. Ask all doctors. They specifically 
call NOCICEPTION the peripheral sensor and PAIN the CNS 
(basal...inferior colliculus or was it cingulate...can't remember 
exactly) percept. Pain in your back? NOPE. Pain is in the CNS and 
projected (badly) to the location of your back, like a periscope-view. 
Pain in your gut? NOPE. You have nociceptors in the myenteric/submucosal 
plexuses that convey data to the CNS, which generates PAIN and projects 
it at the gut. Feel sad? Your laterally offset amygdalae create an 
omnidirectional percept centered on your medial cranium region. etc etc 
etc etc


YES. Brains don't have their own sensors or self-represent with a 
perceptual field. So what? That's got nothing whatever to do with the 
matter at hand. CUT cortex and you can kill off "what it is like" 
percepts out there in the body (although in confusing ways). Touch 
appropriate exposed cortex with a non-invasive probe and you can create 
percepts apparently, but not actually, elsewhere in the body.


The entire neural correlates of consciousness (NCC) paradigm is 
dedicated to exploring CNS neurons for correlates of qualia. NOT 
peripheral neurons. Nobody anywhere else in the world thinks that 
sensation is generated in the periphery.


The *CNS* paints your world with qualia-paint in a projected picture 
constructed in the CNS using sensationless data from the periphery. 
Please internalise this brute fact. I didn't invent it or simply choose 
to believe it because it was convenient. I read the literature. It told 
me. It's there to be learned. Lots of people have been doing conclusive, 
real physiology for a very long time. Be empirically informed: Believe 
them. Or, if you are still convinced it's nonsense then tell them, not 
me.  They'd love to hear your evidence and you'll get a nobel prize for 
an amazing about-turn in medical knowledge. :-)


This has been known, apparently by everybody but computer 
scientists, for 150 years.  Can I consider this a general broadcast once 
and for all? I don't ever want to have to pump this out again. Life is 
too short.


Yes, although it might be more accurate to say that this is the last 
known place where you can catch the sensory percepts as single, 
identifiable things ... I don't think it would really be fair to say 
that this place is the origin of them.


So, for example:

 - If you cover a sheet of red paper you happen to be looking at, the 
red qualia disappear.


 - If instead you knock out the cones that pick up red light in the 
eye, then the red qualia disappear.


 - If you take out the ganglion cells attached to the red cones in the 
retina, the red qualia disappear.


 - If you keep doing this at any point between there and area 17 (the 
visual cortex), you can get the red qualia to disappear.


But after that, there is no single place you can cut off the percept 
with one single piece of intervention.




Richard Loosemore








---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Richard Loosemore

Steve Richfield wrote:

To all,
 
I am considering putting up a web site to filter the crazies as 
follows, and would appreciate all comments, suggestions, etc.
 
Everyone visiting the site would get different questions, in different 
orders, etc. Many questions would have more than one correct answer, and 
in many cases, some combinations of otherwise reasonable individual 
answers would fail. There would be optional tutorials for people who are 
not confident with the material. After successfully navigating the site, 
an applicant would submit their picture and signature, and we would then 
provide a license number. The applicant could then provide their name 
and number to 3rd parties to verify that the applicant is at least 
capable of rational thought. This information would look much like a 
driver's license, and could be printed out as needed by anyone who 
possessed a correct name and number.
 
The site would ask a variety of logical questions, most especially 
probing into:
1.  Their understanding of Reverse Reductio ad Absurdum methods of 
resolving otherwise intractable disputes.
2.  Whether they belong to or believe in any religion that supports 
various violent acts (with quotes from various religious texts). This 
would exclude pretty much every religion, as nearly all religions 
condone useless violence of various sorts, or the toleration or exposure 
of violence toward others. Even Buddhists resist MAD (Mutually Assured 
Destruction) while being unable to propose any potentially workable 
alternative to nuclear war. Jesus attacked the money changers with no 
hope of benefit for anyone. Mohammad killed the Jewish men of Medina and 
sold their women and children into slavery, etc., etc.
3.  A statement in their own words that they hereby disavow allegiance 
to any non-human god or alien entity, and that they will NOT follow the 
directives of any government led by people who would obviously fail this 
test. This statement would be included on the license.
 
This should force many people off of the fence, as they would have to 
choose between sanity and Heaven (or Hell).
 
Then, Ben, the CIA, diplomats, etc., could verify that they are dealing 
with people who don't have any of the common forms of societal insanity. 
Perhaps the site should be multi-lingual?
 
Any and all thoughts are GREATLY appreciated.
 
Thanks
 
Steve Richfield


I see how this would work:  crazy people never tell lies, so you'd be 
able to nail 'em when they gave the wrong answers.



8-|



Richard Loosemore


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-18 Thread Richard Loosemore

Harry Chesley wrote:

Richard Loosemore wrote:

Harry Chesley wrote:

Richard Loosemore wrote:

I completed the first draft of a technical paper on consciousness
the other day.   It is intended for the AGI-09 conference, and it
can be found at:

http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf


One other point: Although this is a possible explanation for our
subjective experience of qualia like "red" or "soft", I don't see
it explaining "pain" or "happy" quite so easily. You can
hypothesize a sort of mechanism-level explanation of those by
relegating them to the older or lower parts of the brain (i.e.,
they're atomic at the conscious level, but have more effects at the
physiological level (like releasing chemicals into the system)),
but that doesn't satisfactorily cover the subjective side for me.

I do have a quick answer to that one.

Remember that the core of the model is the *scope* of the analysis
mechanism.  If there is a sharp boundary (as well there might be),
then this defines the point where the qualia kick in.  Pain receptors
are fairly easy:  they are primitive signal lines.  Emotions are, I
believe, caused by clusters of lower brain structures, so the
interface between lower brain and foreground is the place where
the foreground sees a limit to the analysis mechanisms.

More generally, the significance of the foreground is that it sets
a boundary on how far the analysis mechanisms can reach.

I am not sure why that would seem less satisfactory as an explanation
of the subjectivity.  It is a raw feel, and that is the key idea,
no?


My problem is if qualia are atomic, with no differentiable details, why
do some feel different than others -- shouldn't they all be separate
but equal? Red is relatively neutral, while searing hot is not. Part
of that is certainly lower brain function, below the level of
consciousness, but that doesn't explain to me why it feels
qualitatively different. If it was just something like increased
activity (franticness) in response to searing hot, then fine, that
could just be something like adrenaline being pumped into the system,
but there is a subjective feeling that goes beyond that.


There is more than one question wrapped up inside this question, I think.

First:  all qualia feel different, of course.  You seem to be pointing 
to a sense in which pain is more different than most ... ?  But is 
that really a valid idea?


Does pain have "differentiable details"?  Well, there are different 
types of pain ... but that is to be expected, like different colors. 
But that is a relatively trivial point.  Within one single pain there can 
be several *effects* of that pain, including some strange ones that do 
not have counterparts in the vision-color case.


For example, suppose that a searing hot pain caused a simultaneous 
triggering of the motivational system, forcing you to suddenly want to 
do something (like pulling your body part away from the pain).  The 
feeling of wanting (wanting to pull away) is a quale of its own, in a 
sense, so it would not be impossible for one quale (searing hot) to 
always be associated with another (wanting to pull away).  If those 
always occurred together, it might seem that there was structure to the 
pain experience, where in fact there is a pair of things happening.


It is probably more than a pair of things, but perhaps you get my drift.

Remember that having associations to a pain is not part of what we 
consider to be the essence of the subjective experience;  the bit that 
is most mysterious and needs to be explained.


Another thing we have to keep in mind here is that the exact details of 
how each subjective experience feels are certainly going to seem 
different, and some can seem like each other and not like others ... 
colors are like other colors, but not like pains.


That is to be expected:  we can say that colors happen in a certain 
place in our sensorium (vision) while pains are associated with the body 
(usually), but these differences are not inconsistent with the account I 
have given.  If concept-atoms encoding [red] always attach to all the 
other concept-atoms involving visual experiences, that would make them 
very different from pains like [searing hot], but all of this could be 
true at the same time that [red] would do what it does to the analysis 
mechanism (when we try to think the thought "What is the essence of 
redness?").  So the problem with the analysis mechanism would happen 
with both pains and colors, even though the two different atom types 
played games with different sets of other concept-atoms.




Richard Loosemore





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Richard Loosemore

John G. Rose wrote:

From: Richard Loosemore [mailto:[EMAIL PROTECTED]

Three things.


First, David Chalmers is considered one of the world's foremost
researchers in the consciousness field (he is certainly now the most
celebrated).  He has read the argument presented in my paper, and he
has
discussed it with me.  He understood all of it, and he does not share
any of your concerns, nor anything remotely like your concerns.  He had
one single reservation, on a technical point, but when I explained my
answer, he thought it interesting and novel, and possibly quite valid.

Second, the remainder of your comments below are not coherent enough to
be answerable, and it is not my job to walk you through the basics of
this field.

Third, about your digression:  gravity does not escape from black
holes, because gravity is just the curvature of spacetime.  The other
things that cannot escape from black holes are not forces.

I will not be replying to any further messages from you because you are
wasting my time.




I read this paper several times and still have trouble holding the model
that you describe in my head, as it fades quickly and then there is just a
memory of it (recursive ADD?). I'm not up on the latest consciousness
research but still somewhat understand what is going on there. Your paper is
a nice and terse description, but getting others to understand the highlighted
entity that you are trying to describe may be easier with more
diagrams. When I kind of got it for a second it did appear quantitative,
like mathematically describable. I find it hard to believe though that
others have not put it this way; I mean, doesn't Hofstadter talk about this
in his books, in an unacademic fashion?



Hofstadter does talk about loopiness and recursion in ways that are 
similar, but the central idea is not the same.  FWIW I did have a brief 
discussion with him about this at the same conference where I talked to 
Chalmers, and he agreed that his latest ideas about consciousness and 
the one I was suggesting did not seem to overlap.





Richard Loosemore



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Richard Loosemore

Ben Goertzel wrote:


Sorry to be negative, but no, my proposal is not in any way a
modernization of Peirce's metaphysical analysis of awareness.



Could you elaborate the difference?  It seems very similar to me.   
You're saying that consciousness has to do with the bottoming-out of 
mental hierarchies in raw percepts that are unanalyzable by the mind ... 
and Peirce's Firsts are precisely raw percepts that are unanalyzable by 
the mind...


It is partly the stance (I arrive at my position from a cognitivist 
point of view, with specific mechanisms that must be causing the 
problem), whereas Peirce appears to suggest the Firsts idea as a purely 
metaphysical proposal.


So, what I am saying is that the resemblance between his position and 
mine is so superficial that it makes no sense to describe the latter as 
a modernization of the former.


A good analogy would be Galilean Relativity and Einstein's Relativity. 
Although there is a superficial resemblance, nobody would really say 
that Einstein was just a modernization of Galileo.





***
The standard meaning of Hard Problem issues was described very well by 
Chalmers, and I am addressing the hard problem of consciousness, not 
the other problems.

***

Hmmm ... I don't really understand why you think your argument is a 
solution to the hard problem ...  It seems like you explicitly 
acknowledge in your paper that it's *not*, actually ...  It's more like 
a philosophical argument as to why the hard problem is unsolvable, IMO.


No, that is only part one of the paper, and as you pointed out before, 
the first part of the proposal ends with a question, not a statement 
that this was a failure to explain the problem.  That question was 
important.


The important part is the analysis of "explanation" and "meaning".  This 
can also be taken to be about your use of the word "unsolvable" in the 
above sentence.


What I am claiming (and I will make this explicit in a revision of the 
paper) is that these notions of explanation, meaning, solution to 
the problem, etc., are pushed to their breaking point by the problem of 
consciousness.  So it is not that there is a problem with understanding 
consciousness itself, so much as there is a problem with what it means 
to *explain* things.


Other things are easy to explain, but when we ask for an explanation 
of something like consciousness, the actual notion of explanation 
breaks down in a drastic way.  This is very closely related to the idea 
of an objective observer in physics ... in the quantum realm that 
notion breaks down.


What I gave in my paper was (a) a detailed description of how the 
confusion about consciousness arises [peculiar behavior of the analysis 
mechanism], but then (b) I went on to point out that this peculiar behavior 
infects much more than just our ability to explain consciousness, 
because it casts doubt on the fundamental meaning of explanation and 
semantics and ontology.


The conclusion that I then tried to draw was that it would be wrong to 
say that consciousness was just an artifact or (ordinarily) inexplicable 
thing, because this would be to tacitly assume that the sense of 
"explain" that we are using in these statements is the same one we have 
always used.  Anyone who continued to use "explain" and "mean" (etc.) in 
their old context would be stuck in what I have called Level 0, and in 
that level the old meanings [sic] of those terms are just not able to 
address the issue of consciousness.


Go back to the quantum mechanics analogy again:  it is not right to 
cling to old ideas of position and momentum, etc., and say that we 
simply do not know the position of an electron.  The real truth - the 
new truth about how we should understand position and momentum - is 
that the position of the electron is fundamentally not even determined 
(without observation).


This analogy is not just an analogy, as I think you might begin to 
guess:  there is a deep relationship between these two domains, and I am 
still working on a way to link them.





Richard Loosemore.















---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Zombies, Autism and Consciousness {WAS Re: [agi] A paper that actually does solve the problem of consciousness]

2008-11-17 Thread Richard Loosemore

Trent Waddington wrote:

Richard,

  After reading your paper and contemplating the implications, I
believe you have done a good job at describing the intuitive notion of
consciousness that many lay-people use the word to refer to.  I
don't think your explanation is fleshed out enough for those
lay-people, but it's certainly sufficient for most of the people on this
list.  I would recommend that anyone who hasn't read the paper, and
has an interest in this whole consciousness business, give it a read.

I especially liked the bit where you describe how the model of self
can't be defined in terms of anything else.. as it is inherently
recursive.  I wonder whether the dynamic updating of the model of self
may well be exactly the subjective experience of consciousness that
people describe.  If so, the notion of a p-zombie is not impossible,
as you suggest in your conclusions, but simply an AGI without a
self-model.


This is something that does intrigue me (the different kinds of 
self-model that could be in there), but I come to slightly different 
conclusions.


I think someone (Putnam, IIRC) pointed out that you could still have 
consciousness without the equivalent of any references to self and 
others, because such a creature would still be experiencing qualia.


But, that aside, do you not think that a creature with absolutely no 
self model at all would have some troubles?  It would not be able to 
represent itself in the context of the world, so it would be purely 
reactive.  But wait:  come to think of it, could it actually control any 
limbs if it did not have some kind of model of itself?


Now, suppose you grant me that all AGIs would have at least some model 
of self (if only to control a single robot arm):  then, if the rest of 
the cognitive mechanism allows it to think in a powerful and recursive 
way about the contents of its own thought processes (which I have 
suggested is one of the main preconditions for being conscious, or even 
being AG-Intelligent), would it not be difficult to stop it from 
developing a more general model of itself than just the simple self 
model needed to control the robot arm?  We might find that any kind of 
self model would be a slippery slope toward a bigger self model.


Finally, consider the case of humans with severe Autism.  One suggestion 
is that they have a very poorly developed, or suppressed self model.  I 
would be *extremely* reluctant to think that these humans are p-zombies, 
just because of that.  I know that is a gut feeling, but even so.






Finally, the introduction says:

  Given the strength of feeling on these matters - for example,
 the widespread belief that AGIs would be dangerous because, as
 conscious beings, they would inevitably rebel against their lack of
 freedom - it is incumbent upon the AGI community to resolve
 these questions as soon as possible.

I was really looking forward to seeing you address this widespread
belief, but unfortunately you declined.  Seems a bit of a tease.

Trent


Oh, I apologize. :-(

I started out with the intention of squeezing into the paper a 
description of the consciousness proposal PLUS my parallel proposal 
about AGI motivation and emotion.


It became obvious toward the end that I would not be able to say 
anything about the latter (I barely had enough room for a terse 
description of the former).  But then I explained instead that this was 
part of a larger research program to cover issues of motivation, emotion 
and friendliness.  I guess that wording did not really make up for the 
initial tease, so I'll try to rephrase that in the edited version


And I will also try to get the motivation and friendliness paper written 
asap, to complement this one.





Richard Loosemore









---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Richard Loosemore

Benjamin Johnston wrote:


I completed the first draft of a technical paper on consciousness the 
other day.   It is intended for the AGI-09 conference, and it can be 
found at:



Hi Richard,

I don't have any comments yet about what you have written, because I'm 
not sure I fully understand what you're trying to say... I hope your 
answers to these questions will help clarify things.


It seems to me that your core argument goes something like this:

That there are many concepts for which an introspective analysis can 
only return the concept itself.

That this recursion blocks any possible explanation.
That consciousness is one of these concepts because self is inherently 
recursive.
Therefore, consciousness is explicitly blocked from having any kind of 
explanation.


Is this correct? If not, how have I misinterpreted you?



This is pretty much accurate, but only up to the end of the first phase 
of the paper, where I asked the question: "Is explaining why we cannot 
explain something the same as explaining it?"


The next phase is crucial, because (as I explained a little more in my 
parallel reply to Ben) the conclusion of part 1 is really that the whole 
notion of 'explanation' is stretched to breaking point by the concept of 
consciousness.


So in the end what I do is argue that the whole concept of explanation 
(and meaning, etc) has to be replaced in order to deal with 
consciousness.  Eventually I come to a rather strange-looking 
conclusion, which is that we are obliged to say that consciousness is 
a real thing like any other in the universe, but the exact content of it 
(the subjective core) is truly inexplicable.





I have a thought experiment that might help me understand your ideas:

If we have a robot designed according to your molecular model, and we 
then ask the robot "what exactly is the nature of red" or "what is it 
like to experience the subjective essence of red", the robot may analyze 
this concept, ultimately bottoming out on an incoming signal line.


But what if this robot is intelligent and can study other robots? It 
might then examine other robots and see that when their analysis bottoms 
out on an incoming signal line, what actually happens is that the 
incoming signal line is activated by electromagnetic energy of a certain 
frequency, and that the object recognition routines identify patterns in 
signal lines and that when an object is identified it gets annotated 
with texture and color information from its sensations, and that a 
particular software module injects all that information into the 
foreground memory. It might conclude that the experience of 
experiencing red in the other robot is to have sensors inject atoms 
into foreground memory, and it could then explain how the current 
context of that robot's foreground memory interacts with the changing 
sensations (that have been injected into foreground memory) to make that 
experience 'meaningful' to the robot.
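
The pipeline being described here can be sketched very roughly as follows 
(every name is invented for illustration; this is not code from the 
molecular model): a signal line activated by electromagnetic energy of a 
certain wavelength, an object recognition step that annotates the 
recognised object with colour, and a module that injects the result into 
foreground memory:

# Rough sketch of the robot's perception-to-foreground pipeline (names invented).
FOREGROUND = []              # the robot's foreground memory: a list of atoms

def signal_line(wavelength_nm):
    """Peripheral sensor: EM energy of a given wavelength activates a line."""
    return {"kind": "signal", "wavelength_nm": wavelength_nm}

def recognise(signals):
    """Object recognition: find a pattern in the signal lines and label it."""
    obj = {"kind": "object", "label": "ball"}
    # annotate the recognised object with colour information from sensation
    red = any(620 <= s["wavelength_nm"] <= 750 for s in signals)
    obj["colour"] = "red" if red else "other"
    return obj

def inject(atom):
    """The software module that injects atoms into foreground memory."""
    FOREGROUND.append(atom)

# one perception-to-foreground pass
inject(recognise([signal_line(650), signal_line(640)]))
print(FOREGROUND)   # [{'kind': 'object', 'label': 'ball', 'colour': 'red'}]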


What if this robot then turns its inspection abilities onto itself? Can 
it therefore further analyze red? How does your theory interpret that 
situation?


-Ben


Ahh, but that *is* the way that my theory analyzes the situation, no? 
:-)  What I mean is, I would use a human (me) in place of the first robot.


Bear in mind that we must first separate out the hard problem (the 
pure subjective experience of red) from any easy problems (mere 
radiation sensitivity, etc).  From the point of view of that first 
robot, what will she get from studying the second robot (other robots in 
general), if the question she really wants to answer is "What is the 
explanation for *my* subjective experience of redness?"


She could talk all about the foreground and the way the analysis 
mechanism works in other robots (and humans), but the question is, what 
would that avail her if she wanted to answer the hard problem of where 
her subjective conscious experience comes from?


After reading the first part of my paper, she would say (I hope!):  "Ah, 
now I see how all my questions about the subjective experience of things 
are actually caused by my analysis mechanism doing something weird."


But then (again, I hope) she would say:  "Hmmm, does it meta-explain my 
subjective experiences if I know why I cannot explain these experiences?"


And thence to part two of the paper




Richard Loosemore




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Richard Loosemore

Colin Hales wrote:

Dear Richard,
I have an issue with the 'falsifiable predictions' being used as 
evidence of your theory.


The problem is that right or wrong...I have a working physical model for 
consciousness. Predictions 1-3 are something that my hardware can do 
easily. In fact that kind of experimentation is in my downstream 
implementation plan. These predictions have nothing whatsoever to do 
with your theory or mine or anyones. I'm not sure about prediction 4. 
It's not something I have thought about, so I'll leave it aside for now. 
In my case, in the second stage of testing of my chips, one of the 
things I want to do is literally 'Mind Meld', forming a bridge of 4 sets 
of compared, independently generated qualia. Ultimately the chips may be 
implantable, which means a human could experience what they generate in 
the first person...but I digress


Your statement "This theory of consciousness can be used to make some 
falsifiable predictions" could be replaced by "ANY theory of 
consciousness can be used to make falsifiable predictions 1..4 as 
follows...", which basically says they are not predictions that falsify 
anything at all. In which case the predictions cannot be claimed to 
support your theory. The problem is that the evidence of predictions 1-4 
acts merely as a correlate. It does not test any particular critical 
dependency (causality origins). The predictions are merely correlates of 
any theory of consciousness. They do not test the causal necessities. In 
any empirical science paper the evidence could not be held in support of 
the claim and they would be would be discounted as evidence of your 
mechanism. I could cite 10 different computationalist AGI knowledge 
metaphors in the sections preceding the 'predictions' and the result 
would be the same.


So ... if I was a reviewer I'd be unable to accept the claim that your 
'predictions' actually said anything about the theory preceding them. 
This would seem to be the problematic issue of the paper. You might want 
to take a deeper look at this issue and try to isolate something unique 
to your particular solution - which has  a real critical dependency in 
it. Then you'll  have an evidence base of your own that people can use 
independently. In this way your proposal  could be seen to be scientific 
in the dry empirical sense.


By way of example... a computer program is  not scientific evidence of 
anything. The computer materials, as configured by the program, actually 
causally necessitate the behaviour. The program is a correlate. A 
correlate has the formal evidentiary status of 'hearsay'. This is the 
sense in which I invoke the term 'correlate' above.


BTW I have fallen foul of this problem myself...I had to look elsewhere 
for real critical dependency, like I suggested above. You never know, 
you might find one in there someplace! I found one after a lot of 
investigation. You might, too.


Regards,

Colin Hales


Okay, let me phrase it like this:  I specifically say (or rather I 
should have done... this is another thing I need to make more explicit!) 
that the predictions are about making alterations at EXACTLY the 
boundary of the analysis mechanisms.


So, when we test the predictions, we must first understand the mechanics 
of human (or AGI) cognition well enough to be able to locate the exact 
scope of the analysis mechanisms.


Then, we make the tests by changing things around just outside the reach 
of those mechanisms.


Then we ask subjects (human or AGI) what happened to their subjective 
experiences.  If the subjects are ourselves - which I strongly suggest 
must be the case - then we can ask ourselves what happened to our 
subjective experiences.


My prediction is that if the swaps are made at that boundary, then 
things will be as I state.  But if changes are made within the scope of 
the analysis mechanisms, then we will not see those changes in the qualia.


So the theory could be falsified if changes in the qualia are NOT 
consistent with the theory, when changes are made at different points in 
the system.  The theory is all about the analysis mechanisms being the 
culprit, so in that sense it is extremely falsifiable.


Now, correct me if I am wrong, but is there anywhere else in the 
literature where you have seen anyone make a prediction that the 
qualia will be changed by the alteration of a specific mechanism, but 
not by other, fairly similar alterations?





Richard Loosemore


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Dan Dennett [WAS Re: [agi] A paper that actually does solve the problem of consciousness]

2008-11-17 Thread Richard Loosemore

Ben Goertzel wrote:


Ed,

BTW on this topic my view seems closer to Richard's than yours, though 
not anywhere near identical to his either.  Maybe I'll write a blog post 
on consciousness to clarify, it's too much for an email...


I am very familiar with Dennett's position on consciousness, as I'm sure 
Richard is, but I consider it a really absurd and silly argument.  I'll 
clarify in a blog post sometime soon, but I don't have time for it now.


Anyway, arguing that experience basically doesn't exist, which is what 
Dennett does, certainly doesn't solve the hard problem as posed by 
Chalmers ... it just claims that the hard problem doesn't exist...


ben


Agreed.

I like Dennett's analytical style in many ways, but I was disappointed 
when I realized where he was going with the multiple drafts account.


He falls into a classic trap.  Chalmers says: "Whooaa!  There is a big, 
3-part problem here: (1) we can barely even define what we mean by 
consciousness; (2) that fact of its indefinability seems almost 
intrinsic to the definition of it!; and (3) nevertheless, most of us are 
convinced that there is something significant that needs to be explained 
here."


So Chalmers is *pointing* at the dramatic conjunction of the three 
things (inexplicability, inexplicability that seems intrinsic to the 
definition, and the need for an explanation) ... and he is saying that 
these three combined make a very, very hard problem.


But then what Dennett does is walk right up and say "Whooaa!  There is a 
big problem here: (1) you can barely even define what you mean by 
consciousness, so you folks are just confused."


Chalmers is trying to get Dennett to go upstairs and look at the problem 
from a higher perspective, but Dennett digs in his heels and insists on 
looking at the problem *only* from the ground floor level.  He can only 
see the fact that there is a problem with defining it; he cannot see 
that the problem itself is interesting.


What I have tried to do is take it one step further and say that if we 
understand the nature of the confusion we can actually resolve it 
(albeit in a weird kind of way).






Richard Loosemore


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Richard Loosemore

Matt Mahoney wrote:

--- On Mon, 11/17/08, Richard Loosemore [EMAIL PROTECTED] wrote:

Okay, let me phrase it like this:  I specifically say (or rather I
should have done... this is another thing I need to make more
explicit!) that the predictions are about making alterations at
EXACTLY the boundary of the analysis mechanisms.

So, when we test the predictions, we must first understand the
mechanics of human (or AGI) cognition well enough to be able to
locate the exact scope of the analysis mechanisms.

Then, we make the tests by changing things around just outside the
reach of those mechanisms.

Then we ask subjects (human or AGI) what happened to their 
subjective experiences.  If the subjects are ourselves - which I

strongly suggest must be the case - then we can ask ourselves what
happened to our subjective experiences.

My prediction is that if the swaps are made at that boundary, then
things will be as I state.  But if changes are made within the
scope of the analysis mechanisms, then we will not see those
changes in the qualia.

So the theory could be falsified if changes in the qualia are NOT
consistent with the theory, when changes are made at different
points in the system.  The theory is all about the analysis
mechanisms being the culprit, so in that sense it is extremely
falsifiable.

Now, correct me if I am wrong, but is there anywhere else in the
literature where you have seen anyone make a prediction that
the qualia will be changed by the alteration of a specific
mechanism, but not by other, fairly similar alterations?


Your predictions are not testable. How do you know if another person
has experienced a change in qualia, or is simply saying that they do?
If you do the experiment on yourself, how do you know if you really
experience a change in qualia, or only believe that you do?

There is a difference, you know. Belief is only a rearrangement of
your neurons. I have no doubt that if you did the experiments you
describe, the brains would be rearranged consistently with your
predictions. But what does that say about consciousness?


Yikes, whatever happened to the incorrigibility of belief?!

You seem to have a bone or two to pick with Descartes:  please don't ask me!



Richard Loosemore




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Richard Loosemore

Harry Chesley wrote:

On 11/14/2008 9:27 AM, Richard Loosemore wrote:


 I completed the first draft of a technical paper on consciousness the
 other day.   It is intended for the AGI-09 conference, and it can be
 found at:

 http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf 



Good paper.

A related question: How do you explain the fact that we sometimes are 
aware of qualia and sometimes not? You can perform the same actions 
paying attention or on auto pilot. In one case, qualia manifest, 
while in the other they do not. Why is that?


I actually *really* like this question:  I was trying to compose an 
answer to it while lying in bed this morning.


This is what I started referring to (in a longer version of the paper) 
as a Consciousness Holiday.


In fact, if we start unpacking the idea of what we mean by conscious 
experience, we start to realize that it only really exists when we look 
at it.  It is not even logically possible to think about consciousness - 
any form of it, including *memories* of the consciousness that I had a 
few minutes ago, when I was driving along the road and talking to my 
companion without bothering to look at several large towns that we drove 
through - without applying the analysis mechanism to the consciousness 
episode.


So when I don't remember anything about those towns, from a few minutes 
ago on my road trip, is it because (a) the attentional mechanism did not 
bother to lay down any episodic memory traces, so I cannot bring back 
the memories and analyze them, or (b) that I was actually not 
experiencing any qualia during that time when I was on autopilot?


I believe that the answer is (a), and that IF I had stopped at any point 
during the observation period and thought about the experience I just 
had, I would have been able to appreciate the last few seconds of 
subjective experience.


The real reply to your question goes much much deeper, and it is 
fascinating because we need to get a handle on creatures that probably 
do not do any reflective, language-based philosophical thinking (like 
guinea pigs and crocodiles).  I want to say more, but will have to set 
it down in a longer form.


Does this seem to make sense so far, though?




Richard Loosemore


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Richard Loosemore

Harry Chesley wrote:

Richard Loosemore wrote:

I completed the first draft of a technical paper on consciousness the
other day.   It is intended for the AGI-09 conference, and it can be
found at:

http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf


One other point: Although this is a possible explanation for our
subjective experience of qualia like "red" or "soft", I don't see it
explaining "pain" or "happy" quite so easily. You can hypothesize a sort
of mechanism-level explanation of those by relegating them to the older
or lower parts of the brain (i.e., they're atomic at the conscious
level, but have more effects at the physiological level (like releasing
chemicals into the system)), but that doesn't satisfactorily cover the
subjective side for me.


I do have a quick answer to that one.

Remember that the core of the model is the *scope* of the analysis 
mechanism.  If there is a sharp boundary (as well there might be), then 
this defines the point where the qualia kick in.  Pain receptors are 
fairly easy:  they are primitive signal lines.  Emotions are, I believe, 
caused by clusters of lower brain structures, so the interface between 
lower brain and foreground is the place where the foreground sees a 
limit to the analysis mechanisms.


More generally, the significance of the foreground is that it sets a 
boundary on how far the analysis mechanisms can reach.


I am not sure why that would seem less satisfactory as an explanation of 
the subjectivity.  It is a raw feel, and that is the key idea, no?




Richard Loosemore


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Richard Loosemore

Mark Waser wrote:

An excellent question from Harry . . . .

So when I don't remember anything about those towns, from a few 
minutes ago on my road trip, is it because (a) the attentional 
mechanism did not bother to lay down any episodic memory traces, so I 
cannot bring back the memories and analyze them, or (b) that I was 
actually not experiencing any qualia during that time when I was on 
autopilot?


I believe that the answer is (a), and that IF I can stopped at any 
point during the observation period and thought about the experience I 
just had, I would be able to appreciate the last few seconds of 
subjective experience.


So . . . . what if the *you* that you/we speak of is simply the 
attentional mechanism?  What if qualia are simply the way that other 
brain processes appear to you/the attentional mechanism?


Why would you be experiencing qualia when you were on autopilot?  It's 
quite clear from experiments that humans don't see things in their 
visual field when they are concentrating on other things in their visual 
field (for example, when you are told to concentrate on counting 
something that someone is doing in the foreground while a man in an ape 
suit walks by in the background).  Do you really have qualia from stuff 
that you don't sense (even though your sensory apparatus picked it up, 
it was clearly discarded at some level below the conscious/attentional 
level)?


Yes, I did not mean to imply that all unattended stimuli register in 
consciousness.  Clearly there are things that are simply not seen, even 
when they are in the visual field.


But I would distinguish between that and a situation where you drive for 
50 miles and do not have a memory afterwards of the places you went 
through.  I do not think that we fail to see the road and the towns and 
other traffic in the same sense that we fail to see an unattended 
stimulus in a dual-task experiment, for example.


But then, there are probably intermediate cases.

Some of the recent neural imaging work is relevant in this respect.  I 
will think some more about this whole issue.




Richard Loosemore









- Original Message - From: Richard Loosemore [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, November 17, 2008 1:46 PM
Subject: **SPAM** Re: [agi] A paper that actually does solve the problem 
of consciousness




Harry Chesley wrote:

On 11/14/2008 9:27 AM, Richard Loosemore wrote:


 I completed the first draft of a technical paper on consciousness the
 other day.   It is intended for the AGI-09 conference, and it can be
 found at:


http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf 



Good paper.

A related question: How do you explain the fact that we sometimes are 
aware of qualia and sometimes not? You can perform the same actions 
paying attention or on auto pilot. In one case, qualia manifest, 
while in the other they do not. Why is that?


I actually *really* like this question:  I was trying to compose an 
answer to it while lying in bed this morning.


This is what I started referring to (in a longer version of the paper) 
as a Consciousness Holiday.


In fact, if we start unpacking the idea of what we mean by conscious 
experience, we start to realize that it only really exists when we 
look at it.  It is not even logically possible to think about 
consciousness - any form of it, including *memories* of the 
consciousness that I had a few minutes ago, when I was driving along 
the road and talking to my companion without bothering to look at 
several large towns that we drove through - without applying the 
analysis mechanism to the consciousness episode.


So when I don't remember anything about those towns, from a few 
minutes ago on my road trip, is it because (a) the attentional 
mechanism did not bother to lay down any episodic memory traces, so I 
cannot bring back the memories and analyze them, or (b) that I was 
actually not experiencing any qualia during that time when I was on 
autopilot?


I believe that the answer is (a), and that IF I had stopped at any 
point during the observation period and thought about the experience I 
just had, I would have been able to appreciate the last few seconds of 
subjective experience.


The real reply to your question goes much much deeper, and it is 
fascinating because we need to get a handle on creatures that probably 
do not do any reflective, language-based philosophical thinking (like 
guinea pigs and crocodiles).  I want to say more, but will have to set 
it down in a longer form.


Does this seem to make sense so far, though?




Richard Loosemore


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: https://www.listbox.com/member/?;
Powered by Listbox: http://www.listbox.com







Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Richard Loosemore





Sorry for the late reply.  Got interrupted.


Vladimir Nesov wrote:

(I'm sorry that I make some unclear statements on semantics/meaning,
I'll probably get to the description of this perspective later on the
blog (or maybe it'll become obsolete before that), but it's a long
story, and writing it up on the spot isn't an option.)

On Sat, Nov 15, 2008 at 2:18 AM, Richard Loosemore [EMAIL PROTECTED] wrote:

Taking the position that consciousness is an epiphenomenon and is therefore
meaningless has difficulties.


Rather, p-zombieness in an atom-by-atom identical environment is an epiphenomenon.


By saying that it is an epiphenomenon, you actually do not answer the
questions about instrinsic qualities and how they relate to other things in
the universe.  The key point is that we do have other examples of
epiphenomena (e.g. smoke from a steam train),


What do you mean by smoke being epiphenomenal?


The standard philosophical term, no?  A phenomenon that is associated
with something, but which plays no causal role in the functioning of
that something.

Thus:  smoke coming from a steam train is always there when the train is
running, but the smoke does not cause the steam train to do anything.
It is just a byproduct.






but their ontological status
is very clear:  they are things in the world.  We do not know of other
things with such puzzling ontology (like consciousness), that we can use as
a clear analogy, to explain what consciousness is.

Also, it raises the question of *why* there should be an epiphenomenon.
 Calling it an E does not tell us why such a thing should happen.  And it
leaves us in the dark about whether or not to believe that other systems
that are not atom-for-atom identical with us, should also have this
epiphenomenon.


I don't know how to parse the word epiphenomenon in this context. I
use it to describe reference-free, meaningless concepts, so you can't
say that some epiphenomenon is present here or there; that would be
meaningless.


I think the problem is that you are confusing epiphenomenon with 
something else.


Where did you get the idea that an epiphenomenon was a reference-free, 
meaningless concept?  Not from Eliezer's reference-free, meaningless 
ramblings on his blog, I hope?  ;-)





Jumping into molecular framework as describing human cognition is
unwarranted. It could be a description of AGI design, or it could be a
theoretical description of more general epistemology, but as presented
it's not general enough to automatically correspond to the brain.
Also, semantics of atoms is tricky business, for all I know it keeps
shifting with the focus of attention, often dramatically. Saying that
self is a cluster of atoms doesn't cut it.

I'm not sure of what you are saying, exactly.

The framework is general in this sense:  its components have *clear*
counterparts in all models of cognition, both human and machine.  So, for
example, if you look at a system that uses logical reasoning and bare
symbols, that formalism will differentiate between the symbols that are
currently active, and playing a role in the system's analysis of the world,
and those that are not active.  That is the distinction between foreground
and background.


Without a working, functional theory of cognition, this high-level
descriptive picture has little explanatory power. It might be a step
towards developing a useful theory, but it doesn't explain anything.
There is a set of states of mind that correlates with experience of
apples, etc. So what? You can't build a detailed edifice on general
principles and claim that far-reaching conclusions apply to actual
brain. They might, but you need a semantic link from theory to
described functionality.


Sorry, I don't follow you here.

If you think that there was some aspect of the framework that might NOt 
show up in some architecture for a thinking system, you should probably 
point to it.


I think that the architecture was general, but it referred to a specific 
component (the analysis mechanism) that was well-specified enough to be 
usable in the theory.  And that was all I needed.


If there is some specific way that it doesn't work, you will probably 
have to pin it down and tell me, because I don't see it.
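
To make the foreground/background distinction above concrete, here is a toy 
sketch in Python (my own illustration; the class and field names are invented 
for this example and are not taken from any particular architecture):

from dataclasses import dataclass, field

# Toy sketch: any cognitive architecture distinguishes the concept-atoms that
# are currently active (the "foreground") from the rest of the stored
# knowledge (the "background").

@dataclass
class Atom:
    name: str
    active: bool = False                        # taking part in the current analysis of the world?
    links: list = field(default_factory=list)   # associations to other atoms

@dataclass
class CognitiveSystem:
    atoms: list

    def foreground(self):
        # Atoms currently doing work in the system's model of the moment.
        return [a for a in self.atoms if a.active]

    def background(self):
        # Everything else: stored knowledge not currently in play.
        return [a for a in self.atoms if not a.active]

# Usage: one knowledge base, with only some atoms currently in play.
system = CognitiveSystem([Atom("red", active=True), Atom("cello"), Atom("self", active=True)])
print([a.name for a in system.foreground()])    # ['red', 'self']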







As for the self symbol, there was no time to go into detail.  But there
clearly is an atom that represents the self.


*shrug*
It only stands as a definition: there is no self-neuron, nothing
easily identifiable as self; it's a complex thing. I'm not sure I
even understand what self refers to subjectively. I don't feel any
clear focus of self-perception; my experience is filled with thoughts
on many things, some of them involving management of the thought process,
some of external concepts, but no unified center to speak of...


No, no:  what I meant by self was that somewhere in the system it must 
have a representation for its own self, or it will have a missing 
concept.  Also, in any system there is a basic source of action  
some place

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Richard Loosemore

Colin Hales wrote:



Richard Loosemore wrote:

Colin Hales wrote:

Dear Richard,
I have an issue with the 'falsifiable predictions' being used as 
evidence of your theory.


The problem is that right or wrong...I have a working physical model 
for consciousness. Predictions 1-3 are something that my hardware can 
do easily. In fact that kind of experimentation is in my downstream 
implementation plan. These predictions have nothing whatsoever to do 
with your theory or mine or anyones. I'm not sure about prediction 4. 
It's not something I have thought about, so I'll leave it aside for 
now. In my case, in the second stage of testing of my chips, one of 
the things I want to do is literally 'Mind Meld', forming a bridge of 
4 sets of compared, independently generated qualia. Ultimately the 
chips may be implantable, which means a human could experience what 
they generate in the first person...but I digress


Your statement "This theory of consciousness can be used to make some 
falsifiable predictions" could be replaced by "ANY theory of 
consciousness can be used to make falsifiable predictions 1..4 as 
follows...", which basically says they are not predictions that falsify 
anything at all. In which case the predictions cannot be claimed to 
support your theory. The problem is that the evidence of predictions 
1-4 acts merely as a correlate. It does not test any particular 
critical dependency (causality origins). The predictions are merely 
correlates of any theory of consciousness. They do not test the 
causal necessities. In any empirical science paper the evidence could 
not be held in support of the claim, and it would be discounted as 
evidence of your mechanism. I could cite 10 different 
computationalist AGI knowledge metaphors in the sections preceding 
the 'predictions' and the result would be the same.


So, if I were a reviewer, I'd be unable to accept the claim that your 
'predictions' actually said anything about the theory preceding them. 
This would seem to be the problematic issue of the paper. You might 
want to take a deeper look at this issue and try to isolate something 
unique to your particular solution, something which has a real critical 
dependency in it. Then you'll have an evidence base of your own that 
people can use independently. In this way your proposal could be 
seen to be scientific in the dry empirical sense.


By way of example... a computer program is  not scientific evidence 
of anything. The computer materials, as configured by the program, 
actually causally necessitate the behaviour. The program is a 
correlate. A correlate has the formal evidentiary status of 
'hearsay'. This is the sense in which I invoke the term 'correlate' 
above.


BTW I have fallen foul of this problem myself...I had to look 
elsewhere for real critical dependency, like I suggested above. You 
never know, you might find one in there someplace! I found one after 
a lot of investigation. You might, too.


Regards,

Colin Hales


Okay, let me phrase it like this:  I specifically say (or rather I 
should have done... this is another thing I need to make more 
explicit!) that the predictions are about making alterations at 
EXACTLY the boundary of the analysis mechanisms.


So, when we test the predictions, we must first understand the 
mechanics of human (or AGI) cognition well enough to be able to locate 
the exact scope of the analysis mechanisms.


Then, we make the tests by changing things around just outside the 
reach of those mechanisms.


Then we ask subjects (human or AGI) what happened to their subjective 
experiences.  If the subjects are ourselves - which I strongly suggest 
must be the case - then we can ask ourselves what happened to our 
subjective experiences.


My prediction is that if the swaps are made at that boundary, then 
things will be as I state.  But if changes are made within the scope 
of the analysis mechanisms, then we will not see those changes in the 
qualia.


So the theory could be falsified if changes in the qualia are NOT 
consistent with the theory, when changes are made at different points 
in the system.  The theory is all about the analysis mechanisms being 
the culprit, so in that sense it is extremely falsifiable.


Now, correct me if I am wrong, but is there anywhere else in the 
literature where you have seen anyone make a prediction that the 
qualia will be changed by the alteration of a specific mechanism, but 
not by other, fairly similar alterations?





Richard Loosemore

At the risk of lecturing the already-informed: qualia generation has 
already been localised to specific regions of *cranial* brain material. 
Qualia are not in the periphery. Qualia are not in the spinal CNS. 
Qualia are not in the cranial periphery, e.g. the eyes or lips. Qualia 
are generated in specific CNS cortical and basal regions. 


You are assuming that my references to the *foreground* periphery 
correspond to the physical brain's periphery

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread Richard Loosemore


This commentary represents a fundamental misunderstanding of both the 
paper I wrote and the background literature on the hard problem of 
consciousness.




Richard Loosemore



Ed Porter wrote:
  I respect the amount of thought that went into Richard’s paper 
“Consciousness in Human and Machine: A Theory and Some Falsifiable 
Predictions” --- but I do not think it provides a good explanation of 
consciousness. 

 

  It seems to spend more time explaining the limitations on what we 
can know about consciousness than explaining consciousness, itself.  
What little the paper says about consciousness can be summed up roughly 
as follows: that consciousness is created by a system that can analyze 
and seek explanations from some, presumably experientially-learned, 
knowledgebase, based on associations between nodes in that 
knowledgebase, and that it can determine when it cannot describe a given 
node further, in terms of relations to other nodes, but nevertheless 
senses the given node is real (such as the way it is difficult for a 
human to explain what it is like to sense the color red).


 

  First, I disagree with the paper’s allegation that “analysis” of 
conscious phenomena necessarily “bottoms out” more than analyses of many 
other aspects of reality.  Second, I disagree that conscious phenomena 
are beyond any scientific explanation. 

 

  With regard to the first, I feel our minds contain substantial 
memories of various conscious states, and thus there is actually 
substantial experiential grounding of many aspects of consciousness 
recorded in our brains.  This is particularly true for the consciousness 
of emotional states (for example, brain scans on very young infants 
indicate a high percent of their mental activity is in emotional centers 
of the brain).  I developed many of my concepts of how to design an AGI 
based on reading brain science and performing introspection into my own 
conscious and subconscious thought processes, and I found it quite easy 
to draw many generalities from the behavior of my own conscious mind.  
Since I view the subconscious to be at the same time both a staging area 
for, and a reactive audience for, conscious thoughts, I think one has to 
view the subconscious and consciousness as parts of a functioning whole. 

 

  When I think of the color red, I don’t bottom out.  Instead I have 
many associations with my experiences of redness that provide it with 
deep grounding.  As with the description of any other concept, it is 
hard to explain how I experience red to others, other than through 
experiences we share relating to that concept.  This would include 
things we see in common to be red, or perhaps common emotional 
experiences to seeing the red of blood that has been spilled in 
violence, or the way the sensation of red seems to fill a 2 dimensional 
portion of an image that we perceive as a two dimensional distribution 
of differently colored areas.   But I can communicate within my own mind 
across time what it is like to sense red, such as in dreams when my eyes 
are closed.  Yes, the experience of sensing red does not decompose into 
parts the way the sensed image of a human body can be decomposed into 
the seeing of subordinate parts, but that does not necessarily mean that 
my sensing of something that is a certain color of red is somehow more 
mysterious than my sense of seeing a human body.


 

  With regard to the second notion, that conscious phenomena are not 
subject to scientific explanation, there is extensive evidence to the 
contrary.  The prescient psychological writings of William James, and 
Dr. Alexander Luria’s famous studies of the effects of variously located 
bullet wounds on the minds of Russian soldiers after World War II, both 
illustrate that human consciousness can be scientifically studied.  The 
effects of various drugs on consciousness have been scientifically 
studied.  Multiple experiments have shown that the presence or absence 
of synchrony between neural firings in various parts of the brain have 
been strongly correlated with human subjects reporting the presence or 
absence, respectively, of conscious experience of various thoughts or 
sensory inputs.  Multiple studies have shown that electrode stimulation 
to different parts of the brain tend to make the human consciousness 
aware of different thoughts.  Our own personal experiences with our own 
individual consciousnesses, the current scientific levels of knowledge 
about commonly reported conscious experiences, and increasingly more 
sophisticated ways to correlate objectively observable brain states with 
various reports of human conscious experience, all indicate that 
consciousness already is subject to scientific explanation.  In the 
future, particularly with the advent of much more sophisticated brain 
scanning tools, and with the development of AGI, consciousness will be 
much more subject to scientific explanation

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread Richard Loosemore

John G. Rose wrote:

From: Richard Loosemore [mailto:[EMAIL PROTECTED]

I completed the first draft of a technical paper on consciousness the
other day.   It is intended for the AGI-09 conference, and it can be
found at:

http://susaro.com/wp-
content/uploads/2008/11/draft_consciousness_rpwl.pdf




Um... this is a model of consciousness. One way of looking at it.
Whether or not it is comprehensive enough, I'm not sure, with this
irreducible indeterminacy. But after reading the paper a couple of times
I get what you are trying to describe. It's part of an essence of
consciousness but not sure if it is enough.


But did you notice that the paper argued that if you think on the base 
level, you would have to have that feeling that, as you put it, "...It's 
part of an essence of consciousness but not sure if it is enough"?


The question is:  does the explanation seem consistent with an 
explanation of your feeling that it might not be enough of an explanation?






Richard Loosemore


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread Richard Loosemore

Matt Mahoney wrote:

--- On Sat, 11/15/08, Richard Loosemore [EMAIL PROTECTED] wrote:


Matt Mahoney wrote:

--- On Sat, 11/15/08, Richard Loosemore [EMAIL PROTECTED]
wrote:


This is equivalent to your prediction #2 where connecting the
output of neurons that respond to the sound of a cello to the
input of neurons that respond to red would cause a cello to
sound red. We should expect the effect to be temporary.

I'm not sure how this demonstrates consciousness. How do you
test that the subject actually experiences redness at the
sound of a cello, rather than just behaving as if
experiencing redness, for example, claiming to hear red?

You misunderstand the experiment in a very interesting way!

This experiment has to be done on the *skeptic* herself!

The prediction is that if *you* get your brain rewired, *you*
will experience this.

How do you know what I experience, as opposed to what I claim to
experience?

That is exactly the question you started with, so you haven't gotten
anywhere. I don't need proof that I experience things. I already have
that belief programmed into my brain.

Huh?

Now what are we talking about... I am confused:  I was talking
about proving my prediction.  I simply replied to your doubt about
whether a subject would be experiencing the predicted effects, or
just producing language consistent with it.  I gave you a solution
by pointing out that anyone who had an interest in the prediction
could themselves join in and be a subject.  That seemed to answer
your original question.


You are confusing truth and belief. I am not asking you to make me
believe that consciousness (that which distinguishes you from a
philosophical zombie) exists. I already believe that. I am asking you
to prove it. You haven't done that. I don't believe you can prove the
existence of anything that is both detectable and not detectable.


You are stuck in Level 0.

I showed something a great deal more sophisticated.  In fact, I 
explicitly agreed with you on a Level 0 version of what you just said: 
I actually said in the paper that I (and anyone else) cannot explain 
these phenomena qua the (Level 0) things that they appear to be.


But I went far beyond that:  I explained why people have difficulty 
defining these terms, and I explained a self-consistent understanding of 
the nature of consciousness that involves it being classified as a novel 
type of thing.


You cannot define it properly.

I can explain why you cannot define it properly.

I can both define and explain it, and part of that explanation is that 
the very nature of explanation is bound up in the solution.


But instead of understanding that the nature of explanation has to 
change to deal with the problem, you remain stuck with the old, broken 
idea of explanation, and keep trying to beat the argument with it!




Richard Loosemore


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread Richard Loosemore

Ed Porter wrote:

Richard,

You have provided no basis for your argument that I have misunderstood 
your paper and the literature upon which it is based.


[snip]

My position is that we can actually describe a fairly large number of 
characteristics of our subjective experience of consciousness that most 
other intelligent people agree with.  Although we cannot know that 
others experience the color red exactly the same way we do, we can 
determine that there are multiple shared describable characteristics 
that most people claim to have with regard to their subjective 
experiences of the color red.


This is what I meant when I said that you had completely misunderstood 
both my paper and the background literature:  the statement in the above 
paragraph could only be written by a person who does not understand the 
distinction between the Hard Problem of consciousness (this being 
David Chalmers' term for it) and the Easy problems.


The precise definition of qualia, which everyone agrees on, and which 
you are flatly contradicting here, is that these things do not involve 
anything that can be compared across individuals.


Since this an utterly fundamental concept, if you do not get this then 
it is almost impossible to discuss the topic.


Matt just tried to explain it to you.  You did not get it even then.




Richard Loosemore














---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread Richard Loosemore

Ben Goertzel wrote:



Ed / Richard,

It seems to me that Richard's proposal is in large part a modernization 
of Peirce's metaphysical analysis of awareness.


Peirce introduced foundational metaphysical categories of First, Second 
and Third ... where First is defined as raw unanalyzable awareness/being ...


http://www.helsinki.fi/science/commens/terms/firstness.html

To me, Richard's analysis sounds a lot like Peirce's statement that 
consciousness is First...


And Ed's refutation sounds like a rejection of First as a meaningful 
category, and an attempt to redirect the conversation to the level of 
Third...


Sorry to be negative, but no, my proposal is not in any way a 
modernization of Peirce's metaphysical analysis of awareness.


The standard meaning of Hard Problem issues was described very well by 
Chalmers, and I am addressing the hard problem of consciousness, not 
the other problems.


Ed is talking about consciousness in a way that plainly wanders back and 
forth between Hard Problem issues and Easy Problem, and as such he has 
misunderstood the entirety of what I wrote in the paper.


It might be arguable that my position relates to Feigl, but even that is 
significantly different.






Richard Loosemore


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread Richard Loosemore

Mike Tintner wrote:
Richard: The precise definition of qualia, which everyone agrees on, and
which you are flatly contradicting here, is that these things do not
involve anything that can be compared across individuals.

Actually, we don't do a bad job of comparing our emotions/sensations - 
not remotely perfect, but not remotely as bad as the above philosophy 
would suggest. We do share each other's pains and joys to a remarkable 
extent. That's because our emotions are very much materially based and 
we share basically the same bodies and nervous systems.


The hard problem of consciousness is primarily not about 
qualia/emotions/sensations but about *sentience* - not about what a red 
bus or a warm hand stroking your face feels like to you, but about your 
capacity to feel anything at all - about your capacity not for 
particular types of emotions/sensations, but for emotion generally.


Sentience resides to a great extent in the nervous system, and in 
whatever proto-nervous system preceded it in evolution. When we work out 
how that works, we may solve the hard problem. Unless you believe that 
everything, including inanimate objects, feels, the capacity for 
sentience clearly evolved and has an explanation.


(Bear in mind that AGI-ers' approaches to the problem of consciousness 
are bound to be limited by their disembodied and anti-evolutionary 
prejudices).


Mike

"Hard Problem" is a technical term.

It was coined by David Chalmers, and it has a very specific meaning.

See the Chalmers reference in my paper.




Richard Loosemore


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread Richard Loosemore


Three things.


First, David Chalmers is considered one of the world's foremost 
researchers in the consciousness field (he is certainly now the most 
celebrated).  He has read the argument presented in my paper, and he has 
discussed it with me.  He understood all of it, and he does not share 
any of your concerns, nor anything remotely like your concerns.  He had 
one single reservation, on a technical point, but when I explained my 
answer, he thought it interesting and novel, and possibly quite valid.


Second, the remainder of your comments below are not coherent enough to 
be answerable, and it is not my job to walk you through the basics of 
this field.


Third, about your digression:  gravity does not escape from black 
holes, because gravity is just the curvature of spacetime.  The other 
things that cannot escape from black holes are not forces.


I will not be replying to any further messages from you because you are 
wasting my time.




Richard Loosemore





Ed Porter wrote:

Richard,

 

Thank you for your reply. 

 

It implies your article was not as clearly worded as I would have liked 
it to have been, given the interpretation you say it is limited to.  
When you said


 

“subjective phenomena associated with consciousness ... have the special 
status of being unanalyzable” (last paragraph in the first column of 
page 4 of your paper), 



  you apparently meant something much more narrow, such as

 

“subjective phenomena associated with consciousness [of the type that 
cannot be communicated between people --- and/or --- of the type that 
are unanalyzable] ... have the special status of being unanalyzable.”


 

If you always intended that all your statements about the limited 
ability to analyze conscious phenomena be so limited --- then you were 
right --- I misunderstood your article, at least partially. 

 

We could argue about whether a reader should have understood this narrow 
interpretation.  But it should be noted that Wikipedia, that 
unquestionable font of human knowledge, states that “qualia” has 
multiple definitions, only some of which match the meaning you claim 
“everyone agrees upon,” i.e., subjective experiences that “do not 
involve anything that can be compared across individuals.” 

 

And in Wikipedia’s description of Chalmers’ hard problem of 
consciousness, it lists questions that arguably would be covered by my 
interpretation.


 

It is your paper, and it is up to you to decide how you define things, 
and how clearly you make your definitions known.  But even given your 
narrow interpretation of conscious phenomena in your paper, I think 
there are important additional statements that can be made concerning it.


 

First, given some of the definitions of Chalmers’ hard problem, it is 
not clear how much your definition adds.


 

Second, and more importantly, I do not think there is a totally clear 
distinction between Chalmers’ “hard problem of consciousness” and what 
he classifies as the easy problems of consciousness.  For example, the 
first two paragraphs on the second page of your paper seem to discuss 
the unanalyzable nature of the hard problem.  This includes the 
following statement:


 

“…for every “objective” definition that has ever been proposed [for the 
hard problem], it seems, someone has countered that the real mystery has 
been side-stepped by the definition.”


 

If you define the hard problem of consciousness as being those aspects 
of consciousness that cannot be physically explained, it is like the 
hard problems concerning physical reality.  It would seem that many key 
aspects of physical reality are equally


 

“intrinsically beyond the reach of objective definition, while at the 
same time being as deserving of explanation as anything else in the 
universe” (Second paragraph on page 2 of your paper).


 

Over time we have explained more and more about concepts at the heart of 
physical reality such as time, space, existence, but always some mystery 
remains.  I think the same will be true about consciousness.  In the 
coming decades we will be able to explain more and more about 
consciousness, and what is covered by the “hard problem” (i.e., that 
which is unexplainable) will shrink, but there will always remain some 
mystery.  I believe that within two to six decades we will


 

--be able to examine the physical manifestations of aspects of qualia 
that cannot now be communicated between people (and thus now fit 
within your definition of qualia);


 

--have an explanation for most of the major types of subjectively 
perceived properties and behaviors of consciousness; and


 

--be able to posit reasonable theories about why we experience 
consciousness as a sense of awareness and how the various properties of 
that sense of awareness are created.


 

But I believe there will always remain some mysteries, such as why there 
is any existence of anything, why there is any separation of anything, 
why there is any

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-15 Thread Richard Loosemore
 
dead-end concept-atoms.






Richard Loosemore







On Fri, Nov 14, 2008 at 11:44 PM, Matt Mahoney [EMAIL PROTECTED] wrote:


--- On Fri, 11/14/08, Richard Loosemore [EMAIL PROTECTED] wrote:
 
http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf

Interesting that some of your predictions have already been tested,
in particular, synaesthetic qualia was described by George Stratton
in 1896. When people wear glasses that turn images upside down, they
adapt after several days and begin to see the world normally.

http://www.cns.nyu.edu/~nava/courses/psych_and_brain/pdfs/Stratton_1896.pdf

http://wearcam.org/tetherless/node4.html

This is equivalent to your prediction #2 where connecting the output
of neurons that respond to the sound of a cello to the input of
neurons that respond to red would cause a cello to sound red. We
should expect the effect to be temporary.

I'm not sure how this demonstrates consciousness. How do you test
that the subject actually experiences redness at the sound of a
cello, rather than just behaving as if experiencing redness, for
example, claiming to hear red?

I can do a similar experiment with autobliss (a program that learns
a 2 input logic function by reinforcement). If I swapped the inputs,
the program would make mistakes at first, but adapt after a few
dozen training sessions. So autobliss meets one of the requirements
for qualia. The other is that it be advanced enough to introspect on
itself, and that which it cannot analyze (describe in terms of
simpler phenomena) is qualia. What you describe as elements are
neurons in a connectionist model, and the atoms are the set of
active neurons. Analysis means describing a neuron in terms of its
inputs. Then qualia is the first layer of a feedforward network. In
this respect, autobliss is a single neuron with 4 inputs, and those
inputs are therefore its qualia.
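
The behaviour described for autobliss can be sketched roughly as follows (this
is my own toy reconstruction in Python of the behaviour Matt describes, not
his actual code; the function and variable names are invented):

import random

# Toy reconstruction: a learner is rewarded for matching a 2-input logic
# function; if its two inputs are then swapped, it makes mistakes at first
# and re-adapts with further training.

def train(target, steps=500, swap_inputs=False, scores=None):
    scores = scores if scores is not None else {}   # scores[(inputs, action)] = accumulated reward
    mistakes = 0
    for _ in range(steps):
        a, b = random.randint(0, 1), random.randint(0, 1)
        seen = (b, a) if swap_inputs else (a, b)    # what the learner actually sees
        action = max((0, 1), key=lambda act: scores.get((seen, act), 0))
        reward = 1 if action == target(a, b) else -1
        mistakes += reward < 0
        # bounded reinforcement, so re-adaptation only takes a handful of errors
        scores[(seen, action)] = max(-5, min(5, scores.get((seen, action), 0) + reward))
    return scores, mistakes

asym = lambda a, b: a & (1 - b)     # asymmetric function, so swapping the inputs matters
scores, m1 = train(asym)                                   # initial learning
scores, m2 = train(asym, swap_inputs=True, scores=scores)  # swapped inputs: errors, then re-adaptation
print(m1, m2)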

You might object that autobliss is not advanced enough to ponder its
own self existence. Perhaps you define advanced to mean it is
capable of language (pass the Turing test), but I don't think that's
what you meant. In that case, you need to define more carefully what
qualifies as sufficiently powerful.


-- Matt Mahoney, [EMAIL PROTECTED]





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: https://www.listbox.com/member/?;
Powered by Listbox: http://www.listbox.com




--
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

A human being should be able to change a diaper, plan an invasion, 
butcher a hog, conn a ship, design a building, write a sonnet, balance 
accounts, build a wall, set a bone, comfort the dying, take orders, give 
orders, cooperate, act alone, solve equations, analyze a new problem, 
pitch manure, program a computer, cook a tasty meal, fight efficiently, 
die gallantly. Specialization is for insects.  -- Robert Heinlein











---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-15 Thread Richard Loosemore

Matt Mahoney wrote:

--- On Sat, 11/15/08, Richard Loosemore [EMAIL PROTECTED] wrote:


This is equivalent to your prediction #2 where connecting the output
of neurons that respond to the sound of a cello to the input of
neurons that respond to red would cause a cello to sound red. We
should expect the effect to be temporary.

I'm not sure how this demonstrates consciousness. How do you test
that the subject actually experiences redness at the sound of a
cello, rather than just behaving as if experiencing redness, for
example, claiming to hear red?

You misunderstand the experiment in a very interesting way!

This experiment has to be done on the *skeptic* herself!

The prediction is that if *you* get your brain rewired,
*you* will experience this.


How do you know what I experience, as opposed to what I claim to experience?

That is exactly the question you started with, so you haven't gotten anywhere. 
I don't need proof that I experience things. I already have that belief 
programmed into my brain.


Huh?

Now what are we talking about... I am confused:  I was talking about 
proving my prediction.  I simply replied to your doubt about whether a 
subject would be experiencing the predicted effects, or just producing 
language consistent with it.  I gave you a solution by pointing out that 
anyone who had an interest in the prediction could themselves join in 
and be a subject.  That seemed to answer your original question.




Richard Loosemore


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


[agi] A paper that actually does solve the problem of consciousness

2008-11-14 Thread Richard Loosemore


I completed the first draft of a technical paper on consciousness the 
other day.   It is intended for the AGI-09 conference, and it can be 
found at:


http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf

The title is "Consciousness in Human and Machine: A Theory and Some 
Falsifiable Predictions", and it does solve the problem, believe it or not.


But I have no illusions:  it will be misunderstood, at the very least. 
I expect there will be plenty of people who argue that it does not solve 
the problem, but I don't really care, because I think history will 
eventually show that this is indeed the right answer.  It gives a 
satisfying answer to all the outstanding questions and it feels right.


Oh, and it does make some testable predictions.  Alas, we do not yet 
have the technology to perform the tests, but the predictions are on 
the table, anyhow.


In a longer version I would go into a lot more detail, introducing  the 
background material at more length, analyzing the other proposals that 
have been made and fleshing out the technical aspects along several 
dimensions.  But the size limit for the conference was 6 pages, so that 
was all I could cram in.






Richard Loosemore


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-14 Thread Richard Loosemore

Derek Zahn wrote:
Oh, one other thing I forgot to mention.  To reach my cheerful 
conclusion about your paper, I have to be willing to accept your model 
of cognition.  I'm pretty easy on that premise-granting, by which I 
mean that I'm normally willing to go along with architectural 
suggestions to see where they lead.  But I will be curious to see 
whether others are also willing to go along with you on your generic  
cognitive system model.




That's an interesting point.

In fact, the argument doesn't change too much if we go to other models 
of cognition; it just looks different ... and more complicated, which is 
partly why I wanted to stick with my own formalism.


The crucial part is that there has to be a very powerful mechanism that 
lets the system analyze its own concepts - it has to be able to reflect 
on its own knowledge in a very recursive kind of way.  Now, I think that 
Novamente, OpenCog and other systems will eventually have that sort of 
capability because it is such a crucial part of the "general" bit in 
artificial general intelligence.


Once a system has that mechanism, I can use it to take the line I took 
in the paper.
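
As a rough illustration of what such a self-analysis mechanism does when it
reaches the edge of its own scope, here is a toy sketch in Python (mine, not
the formalism in the paper; the example atoms are invented):

# Toy sketch of a recursive analysis mechanism: it unpacks a concept-atom in
# terms of other atoms, and reports "unanalyzable" when it reaches an atom for
# which no further internal description is available.

knowledge = {
    "apple": ["red", "round", "fruit"],   # analyzable: described by other atoms
    "fruit": ["edible", "plant-part"],
    "red": [],                            # analysis bottoms out here
}

def analyze(atom, depth=0):
    parts = knowledge.get(atom, [])
    if not parts:
        print("  " * depth + atom + ": unanalyzable (analysis bottoms out here)")
        return
    print("  " * depth + atom + " -> " + ", ".join(parts))
    for part in parts:
        analyze(part, depth + 1)

analyze("apple")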


Also, the generic model of cognition was useful to me in the later part 
of the paper where I want to analyze semantics.  Other AGI architectures 
(logical ones for example) implicitly stick with the very strict kinds 
of semantics (possible worlds, e.g.) that I actually think cannot be 
made to work for all of cognition.


Anyhow, thanks for your positive comments.



Richard Loosemore


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-14 Thread Richard Loosemore

Robert Swaine wrote:

Consciousness is akin to the phlogiston theory in chemistry.  It is
likely a shadow concept, similar to how bodily reactions make us
feel that the heart is the seat of emotions.  Gladly, cardiologists
and heart surgeons do not look for a spirit, a soul, or kindness in
the heart muscle.  The brain organ need not contain anything beyond
the means to effect physical behavior, and feedback as to that
behavior.

A finite degree of sensory awareness serves as a suitable replacement
for consciousness; in other words, just feedback.

Would it really make a difference if we were all biological machines,
and our perceptions were the same as those of other animals, or of other
designed minds; more so if we were in a simulated existence?  The
search for consciousness is a misleading (though not entirely
fruitless) path to AGI.


Well, with respect, it does sound as though you did not read the paper
itself, or any of the other books like Chalmers' Conscious Mind.

I say this because there are lengthy (and standard) replies to the 
points that you make, both in the paper and in the literature.


And, please don't misunderstand: this is not a path to AGI.  It is just 
an important side issue that the general public cares about enormously.




Richard Loosemore



--- On Fri, 11/14/08, Richard Loosemore [EMAIL PROTECTED] wrote:


From: Richard Loosemore [EMAIL PROTECTED] Subject: [agi] A paper
that actually does solve the problem of consciousness To:
agi@v2.listbox.com Date: Friday, November 14, 2008, 12:27 PM I
completed the first draft of a technical paper on consciousness the
 other day.   It is intended for the AGI-09 conference, and it can
be found at:

http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf


The title is Consciousness in Human and Machine: A Theory and Some
 Falsifiable Predictions, and it does solve the problem, believe
it or not.

But I have no illusions:  it will be misunderstood, at the very
least. I expect there will be plenty of people who argue that it 
does not solve the problem, but I don't really care, because I

think history will eventually show that this is indeed the right
answer.  It gives a satisfying answer to all the outstanding
questions and it feels right.

Oh, and it does make some testable predictions.  Alas, we do not
yet have the technology to perform the tests yet, but the 
predictions are on the table, anyhow.


In a longer version I would go into a lot more detail, introducing
the background material at more length, analyzing the other 
proposals that have been made and fleshing out the technical

aspects along several dimensions.  But the size limit for the
conference was 6 pages, so that was all I could cram in.





Richard Loosemore










Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread Richard Loosemore

John G. Rose wrote:

From: Richard Loosemore [mailto:[EMAIL PROTECTED]

John LaMuth wrote:

Reality check ***

Consciousness is an emergent spectrum of subjectivity spanning 600
mill. years of evolution involving mega-trillions of competing
organisms, probably selecting for obscure quantum
effects/efficiencies.

Our puny engineering/coding efforts could never approach this - not
even in a million years.

An outwardly pragmatic language simulation, however, is very do-able.

John LaMuth

It is not.

And we can.



I thought what he said was a good description, more or less. Out of 600
million years there may be only a fraction of that which is an improvement,
but it's still there.

How do you know, beyond a reasonable doubt, that any other being is
conscious? 


The problem is, you have to nail down exactly what you *mean* by the
word "conscious" before you start asking questions or making statements.
 Once you start reading about and thinking about all the attempts that 
have been made to get specific about it, some interesting new answers to 
simple questions like this begin to emerge.


What I am fighting here is a tendency for some people to use 
wave-of-the-hand definitions that only capture a fraction of a percent 
of the real meaning of the term.  And sometimes not even that.




At some point you have to trust that others are conscious; within the same
species, you bring them into your recursive loop of consciousness
component mix.

A primary component of consciousness is a self-definition. Conscious
experience is unique to the possessor. It is more than a belief that the
possessor herself is conscious; others who appear conscious may be just
that, appearing to be conscious. Though at some point there is enough
feedback between individuals and/or a group to share conscious
experience.

Still, though, is it really necessary for an AGI to be conscious? Except for
delivering warm fuzzies to the creators? Doesn't that complicate things?
Shouldn't the machines/computers be slaves to man? Or will they be
equal/superior? It's a dog-eat-dog world out there.


One of the main conclusions of the paper I am writing now is that you 
will (almost certainly) have no choice in the matter, because a 
sufficiently powerful type of AGI will be conscious whether you like it 
or not.


The question of slavery is completely orthogonal.




I just want things to be taken care of and no issues. Consciousness brings
issues. Intelligence and consciousness are separate.



Back to my first paragraph above:  until you have thought carefully 
about what you mean by consciousness, and have figured out where it 
comes from, you can't really make a definitive statement like that, surely?


And besides, the "wanting to have things taken care of" bit is a separate
issue.  That is not a problem, either way.



Richard Loosemore




Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread Richard Loosemore

Matt Mahoney wrote:

--- On Tue, 11/11/08, Richard Loosemore [EMAIL PROTECTED] wrote:


Matt Mahoney wrote:

--- On Tue, 11/11/08, Richard Loosemore [EMAIL PROTECTED] wrote:


Your 'belief' explanation is a cop-out because it does not address
any of the issues that need to be addressed for something to count
as a definition or an explanation of the facts that need to be
explained.

As I explained, animals that have no concept of death have
nevertheless evolved to fear most of the things that can kill them.
Humans have learned to associate these things with death, and
invented the concept of consciousness as the large set of features
which distinguishes living humans from dead humans. Thus, humans fear
the loss or destruction of consciousness, which is equivalent to
death.

Consciousness, free will, qualia, and good and bad are universal
human beliefs. We should not confuse them with truth by asking the
wrong questions. Thus, Turing sidestepped the question of "can
machines think?" by asking instead "can machines appear to think?"
Since we can't (by definition) distinguish doing something from
appearing to do something, it makes no sense for us to make this
distinction.

The above two paragraphs STILL do not address any of the
issues that need to be addressed for something to count as a
definition, or an explanation of the facts that need to be
explained.


And you STILL have not defined what consciousness is.


Logically, I don't need to define something to point out that your 
definition fails to address any of the issues that I can read about in 
e.g. Chalmers' book on the subject.  ;-)





Richard Loosemore




Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread Richard Loosemore

Jiri Jelinek wrote:

On Wed, Nov 12, 2008 at 2:41 AM, John G. Rose [EMAIL PROTECTED] wrote:

is it really necessary for an AGI to be conscious?


Depends on how you define it.


Hmmm... interesting angle.  Everything you say from this point on
seems to be predicated on the idea that a person can *choose* to define 
it any way they want, and then run with their definition.


I notice that this is not possible with any other scientific concept -
we don't just define an electron as "Your Plastic Pal Who's Fun To Be
With" and then start drawing conclusions.


The same is true of consciousness.



Richard Loosemore







If you think it's about feelings/qualia
then - no - you don't need that [potentially dangerous] crap, plus we
don't know how to implement it anyway.
If you view it as a high-level built-in response mechanism (which is
supported by feelings in our brain but can/should be done differently
in AGI) then yes - you practically (but not necessarily theoretically)
need something like that for performance. If you are concerned about
self-awareness/consciousness then note that an AGI can demonstrate
general problem solving without knowing anything about itself (and
about many other particular concepts). The AGI should just be able to
learn new concepts (including self), though I think some built-in
support makes sense in this particular case. BTW, for the purpose of my
AGI R&D I defined self-awareness as the use of an internal
representation (IR) of self, where the IR is linked to real features
of the system. Nothing terribly complicated or mysterious about that.
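
(A minimal sketch of that kind of self-model, in Python. The SelfModel
and ProblemSolver names, and the particular features tracked, are
hypothetical illustrations only, not Jiri's actual design: the point is
just that the internal representation of self is refreshed directly
from real features of the running system.)

import time

class SelfModel:
    """Internal representation (IR) of the system, refreshed from real features."""

    def __init__(self, owner):
        self._owner = owner       # the running system the IR is linked to
        self.snapshot = {}

    def refresh(self):
        # Each entry is read directly from the live system, so the IR stays
        # linked to real features rather than being a free-floating belief.
        self.snapshot = {
            "tasks_completed": self._owner.tasks_completed,
            "known_concepts": sorted(self._owner.concepts),
            "uptime_seconds": round(time.time() - self._owner.started, 1),
        }
        return self.snapshot


class ProblemSolver:
    def __init__(self):
        self.started = time.time()
        self.tasks_completed = 0
        self.concepts = {"self"}            # "self" is just another learned concept
        self.self_model = SelfModel(self)   # the IR of self

    def solve(self, task):
        # General problem solving; learning the task also adds a new concept.
        self.concepts.add(task)
        self.tasks_completed += 1
        return "solved: " + task

    def report_on_self(self):
        # "Self-awareness as a use of the IR": answer questions about itself
        # by consulting the refreshed internal representation.
        return self.self_model.refresh()


if __name__ == "__main__":
    agent = ProblemSolver()
    agent.solve("sorting")
    print(agent.report_on_self())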


Doesn't that complicate things?


it does


Shouldn't the machines/computers be slaves to man?


They should and it shouldn't be viewed negatively. It's nothing more
than a smart tool. Changing that would be a big mistake IMO.


Or will they be equal/superior.


Rocks are superior to us in being hard. Cars are superior to us when
it comes to running fast. AGIs will be superior to us when it comes to
problem solving.
So what? Equal/superior in whatever - who cares as long as we can
progress & safely enjoy life - which is what our tools (including AGI)
are being designed to help us with.

Regards,
Jiri Jelinek










Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread Richard Loosemore

Jiri Jelinek wrote:

Richard,


Everything you say from this point on seems to be predicated on the idea that a 
person can *choose* to define it any way they want


There are some good-to-stick-with rules for definitions
http://en.wikipedia.org/wiki/Definition#Rules_for_definition_by_genus_and_differentia
but (even though it's not desirable) in some cases it's IMO OK for
researchers to use slightly different definitions. If you can give us the
*ultimate* definition of consciousness then I would certainly be
interested. I promise I'll not ask for the ultimate cross-domain
definition of every single word used in that definition ;-)


Hey, no problem, but I'm now embarrassed and in an awkward position, 
because I am literally trying to do that.  I am trying to sort the 
problem out once and for all.  I am finishing it for submission to 
AGI-09, so it will be done, ready or not, by the end of today.


This is something I started as a student essay in 1986, but I have been 
trying to nail down a testable prediction that can be applied today, 
rather than in 20 years' time.  I do have testable predictions, but not
ones that can be tested today, alas.


As for the question about definitions, sure, it is true that the rules
for how to do it are not set in stone.  It's just that consciousness is
a rat's nest of conflicting definitions ...



Richard Loosemore



