John LaMuth wrote:
- Original Message - From: "Richard Loosemore" <[EMAIL PROTECTED]>
To:
Sent: Wednesday, November 12, 2008 9:05 AM
Subject: Re: [agi] Ethics of computer-based cognitive experimentation
One of the main conclusions of the paper I am writing now i
r the question about definitions, sure, it is true that the rules
are not cut in stone for how to do it. It's just that consciousness is
a rat's nest of conflicting definitions
Richard Loosemore
---
agi
Archives: https://www.listbox.com/member/archive
ons.
The same is true of "consciousness".
Richard Loosemore
If you think it's about feelings/qualia
then - no - you don't need that [potentially dangerous] crap + we
don't know how to implement it anyway.
If you view it as a high-level built-in response mechanism (which i
Matt Mahoney wrote:
--- On Tue, 11/11/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:
Matt Mahoney wrote:
--- On Tue, 11/11/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:
Your 'belief' explanation is a cop-out because it does not address
any of the issues that ne
John G. Rose wrote:
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
John LaMuth wrote:
Reality check ***
Consciousness is an emergent spectrum of subjectivity spanning 600
million years of evolution involving mega-trillions of competing
organisms, probably selecting for obscure quantum
approach this - not even in a million years.
An outwardly pragmatic language simulation, however, is very do-able.
John LaMuth
It is not.
And we can.
Richard Loosemore
Matt Mahoney wrote:
--- On Tue, 11/11/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:
Your 'belief' explanation is a cop-out because it does not address
any of the issues that need to be addressed for something to count
as a definition or an explanation of the facts that need
Matt Mahoney wrote:
--- On Tue, 11/11/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:
Would a program be conscious if it passes the Turing
test? If not, what else is required?
No.
An understanding of what consciousness actually is, for
starters.
It is a belief.
No it is not.
An
k.
That is why Matt's "it is a belief" is not an explanation: it leaves so
many questions unanswered that it will never make it as a consensus
definition/explanation.
We will see. My paper on the subject is almost finished.
Richard Loosemore
If you only buy into the
Matt Mahoney wrote:
--- On Mon, 11/10/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:
Do you agree that there is no test to distinguish a
conscious human from a philosophical zombie, thus no way to
establish whether zombies exist?
Disagree.
What test would you use?
A sophist
Matt Mahoney wrote:
--- On Fri, 11/7/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:
The question of whether a test is possible at all depends
on the fact that there is a coherent theory behind the idea
of consciousness.
Would you agree that consciousness is determined by a large
Matt Mahoney wrote:
--- On Wed, 11/5/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:
In the future (perhaps the near future) it will be possible
to create systems that will have their own consciousness.
*Appear* to have consciousness, or do you have a test?
Yes.
But the test depe
I cannot speak for
anyone else, but that is my policy.
Richard Loosemore
Bob Mottram wrote:
2008/11/5 Richard Loosemore <[EMAIL PROTECTED]>:
At the end of the
day, if you end up with some problems in the code because you transcribed it
wrong, how would you even begin to debug it?
Brains and digital computers are very different kinds of machinery.
If I w
ing
neuroscientists employed, but of little value otherwise.
Richard Loosemore
ipedia.org/wiki/Grand_Central_(technology)
Richard Loosemore
;impossible'.
Sincerely,
Richard Loosemore
Hi all,
I have been thinking a bit about the nature of conversations on this list.
It seems to me there are two types of conversations here:
1)
Discussions of how to design or engineer AGI systems, using current
computers, according to designs th
te systems. "Our results
suggest that some of these laws probably cannot be derived from first
principles," he says.
END QUOTE.
I particularly liked his choice of words when he said: "We were able to
find a number of properties that were simply decoupl
ions in which we take a kind of Turing-esque, hands-off approach
and say that a goal is just what a "reasonably smart guy" would judge to
be a goal, but this kind of philosophical handwaving is not going to cut
the mustard if real systems need to be designed to
erson up when they respond?
Richard Loosemore
ere anyone would have used the word ... and even then, it
was not directed at the person I was talking to, but at an anonymous
group of people.
Richard Loosemore
On 8/3/08, Eric Burton <[EMAIL PROTECTED]> wrote:
David, in the spirit of scientific objectivity, I just did a search for
m not sure I understand this response at all.
Some serious accusations were made, so I checked them.
The accusations were derogatory, and they turned out to be unfounded.
You are saying that this is humorous?
Richard Loosemore
On Sun, Aug 3, 2008 at 7:12 PM, Richard Loosemore <[EM
against him.
On one occasion I quoted Ed Porter saying to me "Despite your statement
to the contrary --- despite your "FURY" --- I did get your point. Not
everybody beside Richard Loosemore is stupid." This was intended to be
a mild insult directed at me, although it is kind o
William Pearson wrote:
2008/8/3 Richard Loosemore <[EMAIL PROTECTED]>:
I probably don't need to labor the rest of the story, because you have heard
it before. If there is a brick wall between the overall behavior of the
system and the design choices that go into it - if it is im
be in the first few design decisions you make.
The whole show would be over long before you got to the details of
the design, so it would not matter how careful you were to keep the
subunits, connections and interfaces clean.
Let me know if this distinction makes sense. It
his is yet another example of you making off-the-cuff accusations
against me, which are a gross distortion of the truth, that, I believe,
you cannot substantiate.
If you are not able, or are too busy, to back up the allegation,
withdraw it.
Richard Loosemore
eone else's carefully
crafted argument as nothing more than "unsubstantiated intuition", and
you add a few parting distortions of their argument, to boot.
It is THAT behavior that I feel compelled to object to, not the "let's
agree to disagree" behavio
n using an intuitive
> understanding
>
> * setting the parameters of the system via intelligently-guided trial
> and error
You have established no such thing! I am truly impressed by your nerve
though.
Give me some evidence! :-)
Hypotheses non fingo, remember
Richard Loosem
hat in the past, there has been nothing left for me
and the other sensible people on this list to do except shake our heads
and give up trying to explain anything to you.
Consult an outside expert, if you dare. You will get an unpleasant
surprise.
Richard Loosemore
Ed Porter wrot
Joel Pitt wrote:
On Sat, Aug 2, 2008 at 9:56 AM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
There is nothing quite so pathetic as someone who starts their comment with
a word like "Bull", and then proceeds to spout falsehoods.
Thus: in my paper there is a quote from a book i
David Hart wrote:
On 8/2/08, *Richard Loosemore* <[EMAIL PROTECTED]
<mailto:[EMAIL PROTECTED]>> wrote:
Thus: in my paper there is a quote from a book in which Conway's
efforts were described, and it is transparently clear from this
quote that the method Conwa
David Hart wrote:
On 8/2/08, *Richard Loosemore* <[EMAIL PROTECTED]
<mailto:[EMAIL PROTECTED]>> wrote:
Thus: in my paper there is a quote from a book in which Conway's
efforts were described, and it is transparently clear from this
quote that the method Conwa
Priceless! :-)
Just how far does someone have to go on this list - in the way of
sending gratuitous torrents of personal abuse - before the list
moderators at least rebuke them, if not ban them outright?
Richard Loosemore
Ed Porter wrote:
Richard Loosemore is at it again, acting
..
-- Ben
On Fri, Aug 1, 2008 at 3:51 PM, Linas Vepstas <[EMAIL PROTECTED]
<mailto:[EMAIL PROTECTED]>> wrote:
2008/8/1 Richard Loosemore <[EMAIL PROTECTED]
<mailto:[EMAIL PROTECTED]>>:
>important
> fact in this case is that if Conway had starte
ing that I have said.
Richard Loosemore
I understand that pursuing AGI designs based closely on the human
mind/brain has certain advantages ... but it also has certain obvious
disadvantages, such as intrinsically inefficient usage of the (very
nonbrainlike) compute resources at our dispos
12:16 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
Borislav Iordanov wrote:
Richard,
No, this does not address the complex systems problem, because there is
a very specific challenge, or brick wall, that you do not mention, and
there is also a very specific recommendation, included
o me when I read
it in 1986/7, and that idea of relaxation is exactly what was behind the
descriptions that I gave, earlier in this thread, of systems that tried
to do recognition and question answering by constraint relaxation.
Richard Loosemore
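The constraint-relaxation idea mentioned here can be sketched in a few lines. This is a toy of my own devising (made-up units, weights, and evidence values - not the systems described in the thread): rival interpretations inhibit each other, bottom-up evidence supports each, and repeated small updates let the more consistent hypothesis win.

```python
# Toy constraint relaxation: units nudge each other until one
# interpretation dominates. All names and numbers are invented.

def relax(activations, weights, evidence, steps=200, rate=0.1):
    """Iteratively move each unit toward its evidence plus weighted
    input from other units, clipping activations to [0, 1]."""
    a = dict(activations)
    for _ in range(steps):
        new_a = {}
        for unit in a:
            net = evidence.get(unit, 0.0) + sum(
                w * a[other]
                for (u, other), w in weights.items() if u == unit
            )
            new_a[unit] = min(1.0, max(0.0, a[unit] + rate * net))
        a = new_a
    return a

# Two rival word hypotheses for an ambiguous input; "cat" has slightly
# stronger bottom-up evidence, and the two inhibit each other.
units = {"cat": 0.5, "car": 0.5}
weights = {("cat", "car"): -1.0, ("car", "cat"): -1.0}  # mutual inhibition
evidence = {"cat": 0.3, "car": 0.2}

final = relax(units, weights, evidence)
```

With slightly stronger evidence for "cat", the network settles with the "cat" unit fully active and its rival suppressed - the "winner" emerges from the relaxation process rather than from any explicit decision step.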
Brad Paulsen wrote:
Richard Loosemore wrote:
Brad Paulsen wrote:
James,
Someone ventured the *opinion* that keeping such a list of "things I
don't know" was "nonsensical," but I have yet to see any evidence or
well-reasoned argument backing that opinion. So, it
Brad Paulsen wrote:
Richard Loosemore wrote:
Brad Paulsen wrote:
Richard Loosemore wrote:
Brad Paulsen wrote:
Richard Loosemore wrote:
Brad Paulsen wrote:
All,
Here's a question for you:
What does fomlepung mean?
If your immediate (mental) response was "I don't
ot merely an "opinion", it was a reasoned argument,
illustrated by an example of a nonword that clearly belonged to a vast
class of nonwords.
Richard Loosemore
erish and claim that it is a theory of
everything, then you tell the world that it is the world's
responsibility to prove you wrong.
Incoherent gibberish cannot be proven wrong.
It is part of the very definition of "incoherent gibberish" that such
stuff cannot be proven wrong.
H
lexical level or the semantic level.
Valentina, it seems to me, was reacting to the humorous example I gave,
not mocking you personally.
Certainly, if you feel that I insulted you I am quite willing to
apologize for what (from my point of view) was an accident of prose style.
Richard
Brad Paulsen wrote:
Richard Loosemore wrote:
Brad Paulsen wrote:
Richard Loosemore wrote:
Brad Paulsen wrote:
All,
Here's a question for you:
What does fomlepung mean?
If your immediate (mental) response was "I don't know," it means
you're not a slang
do not know the answer to a question.
Richard Loosemore
Neural Correlates of Lexical Access During Visual Word Recognition,
Binder, J.R., McKiernan, K.A., Parsons, M.E., Westbury, C.F., Possing,
E.T., Kaufman, J.N., Buchanan, L.J., J. Cogn. Neurosci. 2003; 15: 372-393
People can discriminate re
sunderstanding of
your question, nor anyone being deliberately rude to you.
Richard Loosemore
Valentina Poletti wrote:
lol.. well said richard.
the stimuli simply invoke no significant response and thus our brain
concludes that we 'don't know'. that's why i
Brad Paulsen wrote:
Richard Loosemore wrote:
Brad Paulsen wrote:
All,
Here's a question for you:
What does fomlepung mean?
If your immediate (mental) response was "I don't know," it means
you're not a slang-slinging Norwegian. But, how did your brain
pro
est word-recognition neural nets that I
built and studied in the 1990s, activation of a nonword proceeded in a
very different way than activation of a word: it would have been easy
to build something to trigger a "this is a nonword" neuron.
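A minimal sketch of that kind of mechanism (a made-up toy lexicon and threshold, not the actual networks from the 1990s): word units gain activation from position-wise letter overlap with the stimulus, and a "nonword" response fires when no word unit clears the threshold.

```python
# Toy word-recognition net: lexicon, overlap scores, and threshold are
# all invented for illustration.

LEXICON = ["cat", "car", "cot", "dog"]

def activation(word, stimulus):
    """Fraction of positions where word and stimulus share a letter."""
    n = max(len(word), len(stimulus))
    return sum(a == b for a, b in zip(word, stimulus)) / n

def recognize(stimulus, threshold=0.67):
    acts = {w: activation(w, stimulus) for w in LEXICON}
    best = max(acts, key=acts.get)
    if acts[best] >= threshold:
        return best          # a word unit won
    return "<NONWORD>"       # the nonword detector fires instead

print(recognize("cat"))  # -> cat
print(recognize("xyz"))  # -> <NONWORD>
```

The point is only that a nonword produces a qualitatively different activation profile (no unit strongly active), so a dedicated "this is a nonword" unit is easy to drive from the same signals.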
Is there some type of AI formalism
;?
You do not show the slightest sign of understanding how to build an AGI
that behaves in a "friendly" way, or indeed in any other way. There is
no mechanism in your patent. All you have done is write some "Articles
of Good Behavior" that the AGI is s
Steve Richfield wrote:
Richard,
Good - you hit this one on its head! Continuing...
On 7/22/08, *Richard Loosemore* <[EMAIL PROTECTED]
<mailto:[EMAIL PROTECTED]>> wrote:
Steve Richfield wrote:
THIS is a big question. Remembering that absolutely ANY function
Steve Richfield wrote:
Richard,
You are confusing what PCA now is, and what it might become. I am more
interested in the dream than in the present reality. Detailed comments
follow...
On 7/21/08, *Richard Loosemore* <[EMAIL PROTECTED]
<mailto:[EMAIL PROTECTED]>> wrote:
much better things to come."
There is little reason to hope for better things to come (except for the
low level mechanisms that Derek quite correctly pointed out), because
the whole PCA idea is a dead end.
A dead end as a general AGI theory, mark you. It has its uses.
Richard Loosemor
Steve Richfield wrote:
Richard,
On 7/21/08, *Richard Loosemore* <[EMAIL PROTECTED]
<mailto:[EMAIL PROTECTED]>> wrote:
Principal component analysis is not new, it has a long history,
Yes, as I have just discovered. What I do NOT understand is why anyone
bothers with cluste
look really impressive until
you realize how limited and non-extensible it is.
Richard Loosemore
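For readers new to the PCA subthread, plain principal component analysis is easy to state concretely. The following numpy sketch (toy data, invented here as background, not any poster's specific proposal) centers a data matrix, takes its SVD, and reads off how much variance the first principal component explains.

```python
# Plain PCA via SVD on toy two-dimensional data.
import numpy as np

rng = np.random.default_rng(0)
# Data stretched along one direction: x2 is mostly a scaled copy of x1.
x1 = rng.normal(size=200)
data = np.column_stack([x1, 3 * x1 + 0.1 * rng.normal(size=200)])

centered = data - data.mean(axis=0)           # PCA requires centering
_, s, vt = np.linalg.svd(centered, full_matrices=False)

explained = s**2 / np.sum(s**2)   # variance ratio per component
pc1 = vt[0]                       # direction of maximum variance
```

On data like this the first component soaks up nearly all the variance, which is exactly why PCA demos "look really impressive" on low-dimensional, nearly linear data.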
Steve Richfield wrote:
Y'all,
I have long predicted a coming "Theory of Everything" (TOE) in CS that
would, among other things, be the "secret sauce" that AGI so desp
get on a kind of annual basis is a
press conference to show the world the first complete, fully functional
human level AGI system.
Haven't seen any of the latter recently, so we are probably due for one
pretty soon now.
Richard Loosemore
John LaMuth wrote:
Announcing the rec
Abram Demski wrote:
For what it is worth, I agree with Richard Loosemore in that your
first description was a bit ambiguous, and it sounded like you were
saying that backward chaining would add facts to the knowledge base,
which would be wrong. But you've cleared up the ambiguity.
I concu
course, just
stretching my humble example as far as I can.)
Note, again, that the temporal and level references in the rules are NOT
used by the BWC. They probably will be used by the part of the program
that does something with the BWC's output (the install(), goLevel(),
etc. functions). A
processes (at
least conceptually).
You are right that logic is as clear as mud outside the pristine
conceptual palace within which it was conceived, but if you're gonna
hang out inside the palace it is a bit of a shame to question its
elegance...
Richard Loosemore
guing. You can have the last say if you want. I
want to spend what time I have to spend on this list conversing with people
who are more concerned about truth than trying to sound like they know more
than others, particularly when they don't.
Anyone who reads this thread will know who was bei
ation to come to a completely incorrect conclusion
("...Thus I think the notion of what is forward and backward chaining
might be somewhat arbitrary...").
This last conclusion was sufficiently inaccurate that I decided to point
that out. It was not a criticism, just a clarificati
ff
(this is one of the most common results).
The two procedures are quite fundamentally different.
Richard Loosemore
Furthermore, Shruti does not use multi-level compositional hierarchies for
many of its patterns, and it only uses generalizational hierarchies for slot
fillers, not for pattern
Ed Porter wrote:
Ed Porter wrote:
## RICHARD LOOSEMORE LAST EMAIL #>>
My preliminary response to your suggestion that other Shastri papers
do
describe ways to make binding happen correctly is as follows: anyone
can suggest ways that *might* cause correct binding to
oblems
in contemporary AI research except to cry foul. He does not even
consider such questions to be valid.
There is not much I can do in the face of such a deep misunderstanding
of the actual words I have written on the topic.
I think you are just venting, to
Ed Porter wrote:
## RICHARD LOOSEMORE LAST EMAIL #>>
My preliminary response to your suggestion that other Shastri papers do
describe ways to make binding happen correctly is as follows: anyone
can suggest ways that *might* cause correct binding to occur - anyone
ca
Ed Porter wrote:
## RICHARD LOOSEMORE WROTE #>>
Now I must repeat what I said before about some (perhaps many?) claimed
solutions to the binding problem: these claimed solutions often
establish the *mechanism* by which a connection could be established IF
THE TWO ITEM
ees are not possible, and in
practice the people who offer this style of explanation never do supply
the guarantees anyway, but just solve peripheral problems.
That is my view of the binding problem. It is a variant of the general
idea that things happen because of complexity (although tha
tion argument itself falls down. At least, the version of the
argument you have given here falls down.
Sure, the world might be a simulation, but this argument is not a
compelling reason to believe that the world is *probably* a simulation.
Richard Loosemore
Well, if you are a simulati
sounds like behaviorism, where connections between sensory
patterns (whatever those might be) and actions (whatever those might be)
were supposed to be mediated only by one level of connections.
Puzzled.
Richard Loosemore
case, you are calling it the binding problem getting the right
things to hook up together.
Problem is, you see, that getting the right things to hook up together
is the WHOLE STORY.
Richard Loosemore
Sincerely,
Ed Porter
References
1. Are Cortical Models Really Bou
in and
say "We reckon we can just use our smarts and figure out some heuristics
to get around it".
I'm just trying to get people to do a reality check.
Oh, and meanwhile (when I am not firing off occasional broadsides on
this list) I *am* worki
these mailing lists
continue to "not get it"; if you care why that is, this message is
only intended as a data point -- why *I* don't get it.
Hmmm. Interesting.
My goal is to spark debate on the topic.
I have always claimed that understanding the reason why there is a
problem
CSP is a real problem, you will notice something interesting: the
pattern of failure we have seen over the last fifty years in AI is
exactly what we would have expected if the CSP was indeed as real as I
think it is.
Richard Loosemore
you probably skimmed it a bit too quickly
and got the general conclusion but missed the detail. Unfortunately, I
think you then got the impression that there was not detail to be had.
But, all said and done, I will write a more stylized version of it. Should
be ready by
.html>.
Interesting, thanks for that link!
Just looking at her list I am intrigued. It doesn't look like my list at
all. Nor the many other lists that I have seen.
I will read her paper.
Richard Loosemore
seem to relate their generalizations to my case.
That is not to say that things will not converge, though. I should be
careful not to prejudge something so young.
Richard Loosemore
--- On Sun, 6/29/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:
From: Richard Loosemore <[
t" the complex systems problem ... perhaps you are just confused
about what the argument actually is, and have been confused right from
the beginning?
Richard Loosemore
Ed Porter wrote:
Richard,
Despite your statement to the contrary --- despite your "FURY" --- I did get
your point. Not everybody beside Richard Loosemore is stupid.
I understand there have been people making bold promises in AI for over 40
years, and most of them have been based
to justify a research program.
At the end of the day, I think that the *core* complex systems idea will
outlast all this other stuff, but it will become famous for its impact
on other sciences, rather than for the specific theories of 'complexity'
that it generates.
We will see.
Ri
s!". It is almost comical to go back over the
various responses to the argument: not only do people go flying off in
all sorts of bizarre directions, but they also get quite strenuous about
it at the same time.
Not understanding an argument is not the same as the argument not bei
to
be a problem", or "Quit rocking the boat!", you can bet that nobody
really wants to ask any questions about whether the approaches are
correct, they just want to be left alone to get on with their
approaches. History, I think, will have some interesting t
, no destruction of the Computational Paradigm. It is just
a different way of looking at what 'algorithm' means, that's all.
Richard Loosemore.
have been Simplexity (Kluger), but I am not sure.
Interestingly enough, Melanie Mitchell has a book due out in 2009 called
"The Core Ideas of the Sciences of Complexity". Interesting title,
given my thoughts in the last post.
Richard Loosemore
Jim Bromer wrote:
Richard Loosemore said:
With the greatest of respect, this is a topic that will require some
extensive background reading on your part, because the misunderstandings
in your above text are too deep for me to remedy in the scope of one or
two list postings. For example, my
te much to the core idea. And the core idea is not quite
enough for an entire book.
But, having said that, the core idea is so subtle and so easily
misunderstood that people trip over it without realizing its
significance. Hm.. maybe that means there really should be a
boo
age to get enough work done.
Richard Loosemore
Jim Bromer wrote:
From: Richard Loosemore Jim,
I'm sorry: I cannot make any sense of what you say here.
I don't think you are understanding the technicalities of the argument I
am presenting, because your very first sentence... "But we can invent a
'mathematics'
cent posts, I think this list is already dead.
Richard Loosemore
Ed Porter wrote:
WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI
On Wednesday, June 25, US East Coast time, I had an interesting phone
conversation with Dave Hart, where we discussed just how much hardware cou
ople can do. I do not quite understand how you came to this conclusion.
Richard Loosemore
be able to
'predict' the occurrence of intelligence based on local properties.
Remember the bottom line. My only goal is to ask how different
methodologies would fare if intelligence is complex.
Richard Loosemore
a million such experts (depending only on our
ability to physically build copies of the hardware).
These are just the obvious possibilities. Others could be listed, but
they are hardly necessary.
In this context, asking what AGIs are good for is a little comical.
Ri
Jim Bromer wrote:
- Original Message
From: Richard Loosemore Jim,
I'm sorry: I cannot make any sense of what you say here.
I don't think you are understanding the technicalities of the argument I
am presenting, because your very first sentence... "But we can invent
succeeded, and stop kidding themselves (as many, many AI researchers do)
that they are actually designing AI systems without regard to the human
design.
Okay, enough for now.
Richard Loosemore
ly strong assertion, and unfortunately there is no evidence
(except the intuition of some people) that this is a valid assumption.
Quite the contrary, all the evidence appears to point the other way.
So that one statement is really the crunch point. All the rest is
downhill from that point on.
cal mechanisms. That is the very definition of a
complex system (note: this is a "complex system" in the technical sense
of that term, which does not mean a "complicated system" in ordinary
language).
Richard Loosemore
thematics cannot possibly tell you that this part of the space does
not contain any solutions. That is the whole point of complex systems,
n'est-ce pas? No analysis will let you know what the global properties are
without doing a brute force exploration of (simulations of) the system.
Ric
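Conway's Game of Life, which came up earlier in the thread, is the standard illustration: nothing in the local update rule mentions gliders, yet simulating a 5-cell pattern for four steps shows the whole shape translating one cell diagonally - a global property read off from simulation, not from inspecting the rule. The sketch below is a generic Life stepper over a set of live cells.

```python
# Game of Life over a set of live (x, y) cells; the glider's drift is
# discovered by running the system, not by analyzing the rule.
from collections import Counter

def step(cells):
    """One Life update: count live neighbours, apply birth/survival."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in cells)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)

# After 4 steps the same shape reappears shifted one cell down-right.
shifted = {(x + 1, y + 1) for (x, y) in glider}
```

The local rule is three lines; the fact that this particular configuration travels is only visible in the simulated global behavior.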
gnition baseline. If intelligence involves even a small
amount of complexity, it could well be that this is the only feasible
way to ever get an intelligence up and running.
Treat it, in other words, as a calculus of variations problem.
Richard Loosemore.
of onlookers.
I think that I need to write some more to explain the *way* that I see
this complex systems problem manifesting itself, because that aspect was
not emphasized (due to lack of space) and it leaves a certain amount of
confusion in the air. I will get to that when I can.
Richard
h we could
draw conclusions about the (possible) dangers of AGI.
Such an organization would be pointless. It is bad enough that SIAI is
50% community mouthpiece and 50% megaphone for Yudkowsky's ravings.
More mouthpieces we don't need.
Richard Loosemore
can be
formalized, then you immediately pre-empt the main question that
underlies all of this: if scientific discovery is just a formal
(logico-deductive) process, then thinking is a formal process, and then
you have built in the assumption that intelligence is NOT a complex
system. Th
Abram
I am pressed for time right now, but just to let you know that, now that
I am aware of your post, I will reply soon. I think that many of your
concerns are a result of seeing a different message in the paper than
the one I intended.
Richard Loosemore
Abram Demski wrote:
To be
g the world out into categories, using almost
nothing but exemplar-based learning.
Just because I believe that there is much of value in cognitive science,
doesn't mean I will defend everything done in its name.
Richard Loosemore