Vladimir Nesov wrote:
On Thu, Jan 15, 2009 at 4:34 AM, Richard Loosemore r...@lightlink.com wrote:
Vladimir Nesov wrote:
On Thu, Jan 15, 2009 at 3:03 AM, Richard Loosemore r...@lightlink.com
wrote:
The whole point about the paper referenced above is that they are
collecting
(in a large number
that covaries with novelty is like shooting fish in a
barrel.
Of course, it's not like these are the only people making this kind of
non-progress ;-)
Richard Loosemore
---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https
of the field.
We've attacked from a different direction, but we had a wide range of
targets to choose, believe me.
The short version of the overall story is that neuroscience is out of
control as far as overinflated claims go.
Richard Loosemore
Vladimir Nesov wrote:
On Wed, Jan 14, 2009 at 10:59 PM, Richard Loosemore r...@lightlink.com wrote:
For anyone interested in recent discussions of neuroscience and the level of
scientific validity of the various brain-scanning claims, the study by Vul et
al, discussed here:
http
Vladimir Nesov wrote:
On Thu, Jan 15, 2009 at 3:03 AM, Richard Loosemore r...@lightlink.com wrote:
The whole point about the paper referenced above is that they are collecting
(in a large number of cases) data that is just random noise.
So what? The paper points out a methodological problem
when it could boast a pickup efficiency of 95%. But I have had
the unenviable task of proofreading an entire (Welsh) dictionary in
which the OCR did 95% of the work and I did the other 5%. It was a
nightmare.
That last 5% is where all the action is.
Richard Loosemore
in your posts.
The usual etiquette is to put them on a web server somewhere and give
pointers in your message sent to the list.
Thank you
Richard Loosemore
a little
explanation - anyone in the AI community).
let's take an actual example of good creative thinking happening on the
fly - and what I've called directed free association -
It's by one Richard Loosemore. You as well as others thought pretty
creatively about the problem of the engram
Loosemore
Richard Loosemore wrote:
Harry Chesley wrote:
On 1/9/2009 9:45 AM, Richard Loosemore wrote:
There are certainly experiments that might address some of your
concerns, but I am afraid you will have to acquire a general
knowledge of what is known, first, to be able to make sense
it dequark the tachyon antimatter containment field?
Richard Loosemore
Mark Waser wrote:
But how can it dequark the tachyon antimatter containment field?
Richard,
You missed Mike Tintner's explanation . . . .
You're not thinking your argument through. Look carefully at my
, but I am afraid you will have to acquire a general knowledge
of what is known, first, to be able to make sense of what they might
tell you. There is nothing that can be plucked and delivered as a
direct answer.
Richard Loosemore
Harry Chesley wrote:
On 1/9/2009 9:45 AM, Richard Loosemore wrote:
There are certainly experiments that might address some of your
concerns, but I am afraid you will have to acquire a general
knowledge of what is known, first, to be able to make sense of what
they might tell you
that.
But how can it dequark the tachyon antimatter containment field?
Richard Loosemore
Jim Bromer wrote:
On Mon, Dec 29, 2008 at 4:02 PM, Richard Loosemore r...@lightlink.com wrote:
My friend Mike Oaksford in the UK has written several
papers giving a higher level cognitive theory that says that people are, in
fact, doing something like Bayesian estimation when they make
it.
Richard Loosemore
the exact details of how the analysis mechanism gets
implemented in the brain. The same is true of the other predictions).
Richard Loosemore
know if you eventually come to a
different conclusion.
Richard Loosemore
Steve Richfield wrote:
Richard,
On 12/25/08, *Richard Loosemore* r...@lightlink.com
mailto:r...@lightlink.com wrote:
Steve Richfield wrote:
Ben, et al,
After ~5 months of delay for theoretical work, here are the
basic ideas as to how really fast and efficient
they got that way by having a low tolerance for fools,
nonsense and people who can't tell the difference between the critique
of an idea and a personal insult.
;-)
Richard Loosemore
Why is it that people who repeatedly resort to personal abuse like this
are still allowed to participate in the discussion on the AGI list?
Richard Loosemore
Ed Porter wrote:
Richard,
You originally totally trashed Tononi's paper, including its central
core, by saying
of my time.
With other papers that contain more coherent substance, but perhaps what
looks like an error, I would make the effort. But not this one.
It will have to be left as an exercise for the reader, I'm afraid.
Richard Loosemore
P.S. A hint. All I remember was that he started talking
that their thoughts are still
ungelled.
Anyhow, that's my quick thoughts on him. I'll see if I can dig out his
book at some point.
Richard Loosemore
On Tue, Dec 23, 2008 at 9:53 AM, Richard Loosemore r...@lightlink.com
mailto:r...@lightlink.com wrote:
Ed Porter wrote
Tononi's work, because I listened to him give a talk about
consciousness once. It was *computationally* incoherent.
Richard Loosemore
that will give these people the
ability to think about intelligent systems in new ways.
That is why I am working on Safaire.
Richard Loosemore
by their weak ability to build software systems.
In this case, the science is being crippled by the lack of tools, so
there is no such thing as premature attention to engineering.
Richard Loosemore
ben g
On Mon, Dec 22, 2008 at 9:03 AM, Richard Loosemore r...@lightlink.com
mailto:r
or closer together.
How about:
http://www.geekologie.com/2006/06/nanoparticles_give_robots_prec.php
or
http://www.eetimes.com/news/latest/showArticle.jhtml?articleID=163701010
Richard Loosemore
obnoxiousness, non-Americans
interpret more seriously.
Richard Loosemore
I think we had some mutual colleagues in the past who favored such a
style of discourse ;-)
ben
On Fri, Dec 19, 2008 at 1:49 PM, Pei Wang mail.peiw...@gmail.com
mailto:mail.peiw...@gmail.com wrote:
On Fri, Dec
is an exacerbating factor, is all.
Richard Loosemore
On Fri, Dec 19, 2008 at 7:01 PM, Ben Goertzel b...@goertzel.org wrote:
And when a Chinese doesn't answer a question, it usually means No ;-)
Relatedly, I am discussing with some US gov't people a potential project
involving customizing
|-)
Richard Loosemore
Rafael C.P. wrote:
Cognitive computing: Building a machine that can learn from experience
http://www.physorg.com/news148754667.html
Neuroscience vaporware.
Richard Loosemore
.
Richard Loosemore
data on it, whereas the implication in what you just said was
that this floppy disk could be used to transfer the contents of the
Googleplex :-). Not so fast, I say.
Richard Loosemore
On 12/11/08, Terren Suydam ba...@yahoo.com wrote:
After talking to an old professor of mine
Richard Loosemore
Steve Richfield wrote:
Matt,
On 12/6/08, *Matt Mahoney* [EMAIL PROTECTED]
mailto:[EMAIL PROTECTED] wrote:
--- On Sat, 12/6/08, Steve Richfield [EMAIL PROTECTED]
mailto:[EMAIL PROTECTED] wrote:
Internet AGIs are the technology of the future, and always will
be. There will
that?
Richard Loosemore
. Not just [memory-for-your-first-kiss] affecting the DNA, but the
whole shebang.
If it turns out that this is the correct interpretation, then this is
one hell of a historic moment.
I must say, I am still a little skeptical, but we'll see how it plays out.
Richard Loosemore
Ben Goertzel
Harry Chesley wrote:
On 12/3/2008 8:11 AM, Richard Loosemore wrote:
Am I right in thinking that what these people:
http://www.newscientist.com/article/mg20026845.000-memories-may-be-stored-on-your-dna.html
are saying is that memories can be stored as changes in the DNA
inside neurons
Philip Hunt wrote:
2008/12/3 Richard Loosemore [EMAIL PROTECTED]:
http://www.newscientist.com/article/mg20026845.000-memories-may-be-stored-on-your-dna.html
are saying is that memories can be stored as changes in the DNA inside
neurons?
No. They are saying memories might be stored as changes
in half.
All fun and interesting, but now back to the real AGI
Richard Loosemore
the knowledge storage used by individuals yet, this is still
possible.
There: I invented a possible mechanism.
Does it work?
Richard Loosemore
that something is snipping and recombining the actual
code of the junk DNA, only that the state of the switches is being
used to code for something.
Question is: can the state of the switches be preserved during reproduction?
Richard Loosemore
to escape some people.
Richard Loosemore
the original Quiroga et al paper, and all
the criticism directed against our paper on this list, in the last week
or so, has completely ignored the actual content of that argument.
Richard Loosemore
On Mon, Nov 24, 2008 at 1:32 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
Ben
://hyperlogic.blogspot.com/
at a minimum wittgenstein's Brown Book should be required reading for
all AGI list members
Read it. Along with pretty much everything else he wrote (that is in
print, anyhow).
Calling things a category error is a bit of a cop out.
Richard Loosemore
where the University is!), but it would be
dangerous to assume that we can sort the wheat from the chaff and get it
right every time, no?
Richard Loosemore
On Tue, Nov 25, 2008 at 3:46 PM, Richard Loosemore [EMAIL PROTECTED]
mailto:[EMAIL PROTECTED] wrote:
Tudor Boloni wrote
the clock back to an era when we knew very little about what might be
going on.
If Quiroga et al do a better job now, then that is all to the good. But
Harley and I had a broader perspective, and we feel that the overall
standards are pretty low.
Richard Loosemore
/2008/11/draft_consciousness_rpwl.pdf
Richard Loosemore
data: they tried to say what that
data implied.
Richard Loosemore
Their conclusion, to quote them, is that
How neurons encode different percepts is one of the most intriguing
questions in neuroscience. Two extreme hypotheses are
schemes based on the explicit representations
Steve Richfield wrote:
Richard,
On 11/20/08, *Richard Loosemore* [EMAIL PROTECTED]
mailto:[EMAIL PROTECTED] wrote:
Steve Richfield wrote:
Richard,
Broad agreement, with one comment from the end of your posting...
On 11/20/08, *Richard Loosemore* [EMAIL
are saying that when they talk about the spike trains encoding
Bayesian contingencies, they NEVER mean, or imply, contingencies between
concepts?
Richard Loosemore
Vladimir Nesov wrote:
On Fri, Nov 21, 2008 at 8:09 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
Ben Goertzel wrote:
Richard,
My point was that there are essentially no neuroscientists out there
who believe that concepts are represented by single neurons. So you
are in vehement agreement
Vladimir Nesov wrote:
On Fri, Nov 21, 2008 at 8:34 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
No, object-concepts and the like. Not place, motion or action 'concepts'.
For example, Quiroga et al showed their subjects pictures of famous places
and people, then made assertions about how
encode relationships between concepts.
And yet now you make another assertion about something that you think is
well known among neuroscientists, while completely ignoring the actual
argument that Harley and I brought to bear on this issue.
Richard Loosemore
ben g
On Fri, Nov 21, 2008 at 1
Ben Goertzel wrote:
On Fri, Nov 21, 2008 at 4:44 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
Ben Goertzel wrote:
I saw the main point of Richard's paper as being that the available
neuroscience data drastically underdetermines the nature of neural
knowledge representation ... so
Vladimir Nesov wrote:
On Sat, Nov 22, 2008 at 12:30 AM, Richard Loosemore [EMAIL PROTECTED] wrote:
They want some kind of mixture of sparse and multiply redundant and not
distributed. The whole point of what we wrote was that there is no
consistent interpretation of what they tried to give
that it
really does all hang together, and become well defined enough to be both
testable and buildable as a complete AGI.
The paper I wrote with Harley, and the more recent one on consciousness,
were just a couple of opening salvos in that effort.
Richard Loosemore
the additional disadvantage of being utterly filled with
underlines and boldface. He shouts. Not good in something that is
supposed to be a scientific paper.
Sorry, but this is just junk.
Richard Loosemore
: should we use
Emitter-Coupled Logic in the transistors that are in our computers that
will be running the algorithms.
-|
Richard Loosemore
last effort, but there is a limit to how many
times I can say the same thing and be ignored every time.
Richard Loosemore
If one is talking about the sense of experience and mental associations
a normal human mind associates with the color red, one is talking about
a complex
Steve Richfield wrote:
Richard,
Broad agreement, with one comment from the end of your posting...
On 11/20/08, *Richard Loosemore* [EMAIL PROTECTED]
mailto:[EMAIL PROTECTED] wrote:
Another, closely related thing that they do is talk about low level
issues without realizing just how
of this fact because
they do not know enough cognitive science.
Richard Loosemore
I don't think this is the reason. There are plenty of neuroscientists
out there
who know plenty of cognitive science.
I think many neuroscientists just hold different theoretical
presuppositions than
you, for reasons other
Vladimir Nesov wrote:
On Fri, Nov 21, 2008 at 1:40 AM, Richard Loosemore [EMAIL PROTECTED] wrote:
The main problem is that if you interpret spike timing to be playing the
role that you (and they) imply above, then you are committing yourself to a
whole raft of assumptions about how knowledge
Trent Waddington wrote:
On Fri, Nov 21, 2008 at 11:02 AM, Richard Loosemore [EMAIL PROTECTED] wrote:
Since such luminaries as Jerry Fodor have said much the same thing, I think
I stand in fairly solid company.
Wow, you said Fodor without being critical of his work. Is that legal?
Trent
mechanisms
in the brain.
ENDQUOTE-
Richard Loosemore
On Fri, Nov 21, 2008 at 4:35 AM, Richard Loosemore [EMAIL PROTECTED] wrote:
Vladimir Nesov wrote:
Could you give some references to be specific in what you mean?
Examples of what you consider outdated cognitive theory
in the brain is incoherent
No contest: it is valid there.
But I am only referring to the cases where neuroscientists imply that
what they are talking about are higher level concepts.
This happens extremely frequently.
Richard Loosemore
. Just because I used a particular example of
bottoming-out does not mean that I claimed this was the only way it
could happen.
And, of course, all those other claims of conscious experiences are
widely agreed to be more dilute (less mysterious) than such things as
qualia.
Richard
try to say all of the above in the last post, but you didn't mention
that bit in your reply ;-)
Richard Loosemore
standard of explanation.
I do that in part 2.
So far we have not discussed the whole paper, only part 1.
Richard Loosemore
Richard Loosemore
-Original Message-
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 19, 2008 1:57 PM
To: agi@v2.listbox.com
Subject: Re: [agi] A paper that actually does solve the problem of
consciousness
Ben Goertzel wrote:
Richard,
I
that, there is no single place you can cut off the percept
with one single piece of intervention.
Richard Loosemore
how this would work: crazy people never tell lies, so you'd be
able to nail 'em when they gave the wrong answers.
8-|
Richard Loosemore
Harry Chesley wrote:
Richard Loosemore wrote:
Harry Chesley wrote:
Richard Loosemore wrote:
I completed the first draft of a technical paper on consciousness
the other day. It is intended for the AGI-09 conference, and it
can be found at:
http://susaro.com/wp-content/uploads/2008/11
John G. Rose wrote:
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
Three things.
First, David Chalmers is considered one of the world's foremost
researchers in the consciousness field (he is certainly now the most
celebrated). He has read the argument presented in my paper, and he
has
is not just an analogy, as I think you might begin to
guess: there is a deep relationship between these two domains, and I am
still working on a way to link them.
Richard Loosemore.
I'll try to rephrase that in the edited version
And I will also try to get the motivation and friendliness paper written
asap, to complement this one.
Richard Loosemore
, does it meta-explain my
subjective experiences if I know why I cannot explain these experiences?
And thence to part two of the paper
Richard Loosemore
falsifiable.
Now, correct me if I am wrong, but is there anywhere else in the
literature where you have seen anyone make a prediction that the
qualia will be changed by the alteration of a specific mechanism, but
not by other, fairly similar alterations?
Richard Loosemore
actually resolve it
(albeit in a weird kind of way).
Richard Loosemore
Matt Mahoney wrote:
--- On Mon, 11/17/08, Richard Loosemore [EMAIL PROTECTED] wrote:
Okay, let me phrase it like this: I specifically say (or rather I
should have done... this is another thing I need to make more
explicit!) that the predictions are about making alterations at
EXACTLY
Harry Chesley wrote:
On 11/14/2008 9:27 AM, Richard Loosemore wrote:
I completed the first draft of a technical paper on consciousness the
other day. It is intended for the AGI-09 conference, and it can be
found at:
http://susaro.com/wp-content/uploads/2008/11
Harry Chesley wrote:
Richard Loosemore wrote:
I completed the first draft of a technical paper on consciousness the
other day. It is intended for the AGI-09 conference, and it can be
found at:
http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf
One other point
.
Richard Loosemore
- Original Message - From: Richard Loosemore [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, November 17, 2008 1:46 PM
Subject: **SPAM** Re: [agi] A paper that actually does solve the problem
of consciousness
Harry Chesley wrote:
On 11/14/2008 9:27
it up on the spot isn't an option.)
On Sat, Nov 15, 2008 at 2:18 AM, Richard Loosemore [EMAIL PROTECTED] wrote:
Taking the position that consciousness is an epiphenomenon and is therefore
meaningless has difficulties.
Rather p-zombieness in atom-by-atom the same environment is an epiphenomenon
Colin Hales wrote:
Richard Loosemore wrote:
Colin Hales wrote:
Dear Richard,
I have an issue with the 'falsifiable predictions' being used as
evidence of your theory.
The problem is that right or wrong...I have a working physical model
for consciousness. Predictions 1-3 are something
This commentary represents a fundamental misunderstanding of both the
paper I wrote and the background literature on the hard problem of
consciousness.
Richard Loosemore
Ed Porter wrote:
I respect the amount of thought that went into Richard’s paper
“Consciousness in Human
John G. Rose wrote:
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
I completed the first draft of a technical paper on consciousness the
other day. It is intended for the AGI-09 conference, and it can be
found at:
http://susaro.com/wp-
content/uploads/2008/11/draft_consciousness_rpwl.pdf
Matt Mahoney wrote:
--- On Sat, 11/15/08, Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
--- On Sat, 11/15/08, Richard Loosemore [EMAIL PROTECTED]
wrote:
This is equivalent to your prediction #2 where connecting the
output of neurons that respond to the sound of a cello
not get this then
it is almost impossible to discuss the topic.
Matt just tried to explain it to you. You did not get it even then.
Richard Loosemore
that is
significantly different.
Richard Loosemore
.
See the Chalmers reference in my paper.
Richard Loosemore
to any further messages from you because you are
wasting my time.
Richard Loosemore
Ed Porter wrote:
Richard,
Thank you for your reply.
It implies your article was not as clearly worded as I would have liked,
given the interpretation you say it is limited to.
When
.
I am especially interested in the fact that there are some vague
consciousness feelings we get: things that are kinda mysterious.
Perhaps they are just these atoms that are one step removed from
dead-end concept-atoms.
Richard Loosemore
On Fri, Nov 14, 2008 at 11:44 PM, Matt Mahoney
Matt Mahoney wrote:
--- On Sat, 11/15/08, Richard Loosemore [EMAIL PROTECTED] wrote:
This is equivalent to your prediction #2 where connecting the output
of neurons that respond to the sound of a cello to the input of
neurons that respond to red would cause a cello to sound red. We
should
cram in.
Richard Loosemore
, thanks for your positive comments.
Richard Loosemore
issue that the general public cares about enormously.
Richard Loosemore
--- On Fri, 11/14/08, Richard Loosemore [EMAIL PROTECTED] wrote:
From: Richard Loosemore [EMAIL PROTECTED] Subject: [agi] A paper
that actually does solve the problem of consciousness To:
agi@v2.listbox.com Date: Friday
John G. Rose wrote:
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
John LaMuth wrote:
Reality check ***
Consciousness is an emergent spectrum of subjectivity spanning 600 million
years of evolution involving mega-trillions of competing organisms,
probably selecting for obscure quantum
Matt Mahoney wrote:
--- On Tue, 11/11/08, Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
--- On Tue, 11/11/08, Richard Loosemore [EMAIL PROTECTED] wrote:
Your 'belief' explanation is a cop-out because it does not address
any of the issues that need to be addressed
of consciousness.
Richard Loosemore
If you think it's about feelings/qualia
then - no - you don't need that [potentially dangerous] crap + we
don't know how to implement it anyway.
If you view it as high-level built-in response mechanism (which is
supported by feelings in our brain but can/should be done
definitions, sure, it is true that the rules are not set in stone for how
to do it. It's just that consciousness is a rat's nest of conflicting
definitions
Richard Loosemore