discussion as possible. Let's hope that happens on the occasions when
it is discussed, now and in the future.
Richard Loosemore.
in a 'rational/normative' AI system).
My new policy is to discuss issues only with people who can resist the
temptation to behave like this.
For that reason, Michael, you're now killfiled.
If anyone else wants to discuss the issues, feel free.
Richard Loosemore.
Richard Loosemore wrote:
to actually use the stored information, which is
presumably what a novice AI programmer would do.
Richard Loosemore
a classic example: every single debate
or discussion of the consequences of the singularity, it seems, is
totally dominated by this kind of sloppy thinking.
Richard Loosemore
Matt Mahoney wrote:
I have raised the possibility that a SAI (including a provably friendly
one, if that's
this is a milestone of
mutual accord in a hitherto divided community.
Progress!
Richard Loosemore.
Mitchell Porter wrote:
Richard Loosemore:
In fact, if it knew all about its own design (and it would,
eventually), it would check to see just how possible it might be for
it to accidentally convince itself to disobey its prime directive,
But it doesn't have a prime directive, does
that it is
just vague handwaving without specific questions designed to show that
the argument falls apart under probing. I don't see the argument
falling apart, so making that accusation again would be unjustified.
Richard Loosemore
Ben Goertzel wrote:
Hi,
There is something about the gist of your
solving. The fact
that this works in practice strongly suggests that the universe is
indeed a simulation.
It suggests nothing of the sort.
Hutter's theory is a mathematical fantasy with no relationship to the
real world.
Richard Loosemore.
Razor etc.) is irrelevant if you
or Hutter cannot prove something more than a hand-waving connection
between the mathematical idealizations of intelligence, learning,
etc., and the original meanings of those words.
So my original request stands unanswered.
Richard Loosemore.
P.S. The above
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
What I wanted was a set of non-circular definitions of such terms as
intelligence and learning, so that you could somehow *demonstrate*
that your mathematical idealization of these terms correspond with the
real thing, ... so
to perform as well
as I do, because it redefines what I am trying to do in such a way as
to weaken my performance, and then proves that it can perform better
than *that*).
Richard Loosemore
one on call and ready to go when needed).
I only need the possibility that it will do this, and my conclusion holds.
So: clear question. Does the proof implicitly allow it?
Richard Loosemore.
them to deliver one. Such a proof is completely
valueless.
AIXI is valueless.
QED.
Richard Loosemore.
visible from *here*. What about the stuff (possibly infinite
amounts of stuff) that lies beyond the curvature horizon?
Richard Loosemore
no sense to ask whether there would be
minds so advanced that 'we' could never understand them.
Or, to be precise, it is not at all obvious that such a situation will
ever exist.
Richard Loosemore.
The possibility has occurred to me. :-)
Colin Tate-Majcher wrote:
Heheh, how do you know you didn't want to know what it was like to live
in the 2000s and work toward the Singularity? Maybe we are already
super advanced and just got bored :)
-Colin
On 4/18/07, Richard Loosemore
The full argument is much more detailed, of course, but that is the core
of it.
Oh, and: Shane is *not* the one who proved the correctness of my
assertion! I am not sure where you got that from. ;-)
Richard Loosemore.
, on a machine with only one thousandth of today's power.
And besides, solving the problem of understanding sentences could easily
be done in principle with even a vocabulary as small as 200 words.
Richard Loosemore.
would not actually work. With a
motivational system as bad as that, it would never get to be an AGI in
the first place. Hence your assertion that humanity will be wiped out
by accident is completely untenable.
Richard Loosemore
Keith Elis wrote:
Richard Loosemore wrote:
Your email could be taken as threatening to set up a website
to promote
violence against AI researchers who speculate on ideas that, in your
judgment, could be considered scary.
I'm on your side, too, Richard.
I understand this, and I
mongers in Hollywood would *love* that SIAI-based group to get more
publicity, because they'd make money hand over fist if that happened.
Richard Loosemore
to state your opinion and walk away? Discussion
involves the technical details. Anything less is meaningless.
Richard Loosemore
refers to what would happen if such machines were
built: they would produce a flood of new discoveries on such an immense
scale that we would be jumped from our present technology to the
technology of the far future in a matter of a few years.
Hope that clarifies the situation.
Richard
-----Original Message-----
From: Richard Loosemore [mailto:[EMAIL PROTECTED]]
Sent: Monday, October 22, 2007 11:15 AM
To: singularity@v2.listbox.com
Subject: Re: [singularity] QUESTION
albert medina wrote:
Dear Sirs,
I have a question to ask and I am not sure that I am sending
proofs.)
Hope that helps, but please ask questions if it does not.
Richard Loosemore.
places.
I propose to you that Consciousness (encased within the brain) does not
know Itself, hence the lively quest and fascination for other
intelligence, such as AGI.
Sincerely,
Albert
Richard Loosemore [EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] wrote:
Hello
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
This is nonsense: the result of giving way to science fiction fantasies
instead of thinking through the ACTUAL course of events. If the first
one is benign, the scenario below will be impossible, and if the first
one
the consequences might be more than you were
expecting them to be.
This is my vision of what a Bright Green Tomorrow could be like.
Let me know if you have questions.
Richard Loosemore.
with a "Yeah, but what if everything goes wrong, huh? What if
Frankenstein turns up? Huh? Huh?" comment.
Happens every time.
Richard Loosemore
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
[snip post-singularity utopia]
Let's assume for the moment that the very first AI
on in this discussion.
Richard Loosemore
Mike Tintner wrote:
Every speculation on this board about the nature of future AGIs has
been pure fantasy. Even those which try to dress themselves up in some
semblance of scientific reasoning. All this speculation, for example,
about the friendliness
candice schuster wrote:
Hi Richard,
Without getting too technical on you... how do you propose implementing
these ideas of yours?
In what sense?
The point is that implementation would be done by the AGIs, after we
produce a blueprint for what we want.
Richard Loosemore
you, THAT would be fantasy.
Richard Loosemore
are fun.
Richard Loosemore
on Nanotechnology by Eric Drexler, or the huge
literature on space elevators, or the stuff on life extension.
Not fantasy, really.
Richard Loosemore
status: I am
sure some people will choose not to take that option, and just stay as
they are).
Richard Loosemore
Very concisely put: that is exactly the situation.
Richard Loosemore
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Why do you say that "Our reign will end in a few decades" when, in fact, one
of the most obvious things that would happen in this future is that
humans will be able to *choose* what intelligence level to be
experiencing, on a day
species. Not master and servant. Just one species
with more options than before.
[I can see I am going to have to write this out in more detail, just to
avoid the confusion caused by brief glimpses of the larger picture].
Richard Loosemore
Candice
Date: Thu, 25 Oct 2007 19:02:35
is: its initial feelings of friendliness toward humanity
would have to be the motivation that drove it to find out the CEV.
The goal state of its motivation system is assumed in the initial state
of its motivation system. Hence: circular.
Richard Loosemore
Stefan Pernar wrote:
On 10/26/07, Richard Loosemore [EMAIL PROTECTED] wrote:
Stefan can correct me if I am wrong here, but I think that both yourself
and Aleksei have misunderstood the sense in which he is pointing to a
circularity.
If you build
up to their discoveries.
Richard Loosemore
these last points, we agree.
Richard Loosemore
Charles D Hixson wrote:
Richard Loosemore wrote:
candice schuster wrote:
Richard,
Your responses to me seem to go in roundabouts. No insult intended,
however.
You say the AI will in fact reach full consciousness. How on earth
would that ever be possible?
I think I recently (last
at different
levels here, and using these terms in ways that cross over rather
weirdly. I speak only of two different types of mechanism, but that
does not quite map onto your usage. I will have to think about this
some more.
Richard Loosemore
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
Suppose that the collective memories of all the humans make up only one
billionth of your total memory, like one second of memory out of your
human lifetime. Would it make much difference if it were erased
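[A quick back-of-envelope check on that analogy, assuming an 80-year
lifetime (my figure, not Matt's):

lifetime_s = 80 * 365.25 * 24 * 3600   # about 2.5e9 seconds in 80 years
print(f"{1 / lifetime_s:.1e}")         # about 4.0e-10 -- roughly a billionth

So one second out of a lifetime and one billionth of total memory are
indeed the same order of magnitude.]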
pronouncements about Mentifex may be sincere, but his estimates of
its capabilities are somewhat ... exaggerated.
Richard Loosemore
of them are fools, and therefore NONE of
their counter-arguments are valid.
Really. I like Jaron Lanier as a musician, but this is drivel.
Richard Loosemore
Stathis Papaioannou wrote:
On 17/02/2008, Richard Loosemore [EMAIL PROTECTED] wrote:
The first problem arises from Lanier's trick of claiming that there is a
computer, in the universe of all possible computers, that has a machine
architecture and a machine state that is isomorphic to BOTH
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
When people like Lanier allow themselves the luxury of positing
infinitely large computers (who else do we know who does this? Ah, yes,
the AIXI folks), they can make infinitely unlikely coincidences happen.
It is a commonly
Stathis Papaioannou wrote:
On 18/02/2008, Richard Loosemore [EMAIL PROTECTED] wrote:
[snip]
But again, none of this touches upon Lanier's attempt to draw a bogus
conclusion from his thought experiment.
No external observer would ever be able to keep track of such a
fragmented computation
for what consciousness is, which
starts out from a resolution of the definition-difficulty.
I note that Nick Humphrey has recently started to say something very
similar.
Richard Loosemore
is Getting Zapped.
Richard Loosemore
Stathis Papaioannou wrote:
On 19/02/2008, Richard Loosemore [EMAIL PROTECTED] wrote:
Sorry, but I do not think your conclusion even remotely follows from the
premises.
But beyond that, the basic reason that this line of argument is
nonsensical is that Lanier's thought experiment was rigged
Stathis Papaioannou wrote:
On 20/02/2008, Richard Loosemore [EMAIL PROTECTED] wrote:
I am aware of some of those other sources for the idea: nevertheless,
they are all nonsense for the same reason. I especially single out
Searle: his writings on this subject are virtually worthless. I have
, the other must also be
understanding. (Searle's main folly, of course, is that he has never
shown any sign of being able to understand this point).
Richard Loosemore
floating point numbers because the behavior of the
net deteriorated badly if the numerical precision was reduced. This was
especially important on long training runs or large datasets.
Richard Loosemore
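[For anyone who wants to see the effect described above, a minimal sketch
in Python -- my own toy example, not Richard's actual network; the weight,
gradient, and learning rate values are invented for illustration. At low
precision, small updates round away entirely, which is why long training
runs suffer most:

import numpy as np

def train(dtype, steps=100_000):
    # Accumulate many tiny gradient updates into a single weight.
    w = dtype(1.0)
    lr, g = dtype(1e-4), dtype(1e-4)   # learning rate and a typical small gradient
    for _ in range(steps):
        w = dtype(w + lr * g)          # lr*g = 1e-8 underflows to zero in float16
    return float(w)

print(train(np.float64))   # about 1.001 -- the weight moves as expected
print(train(np.float16))   # 1.0 -- every update is lost to rounding]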
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
--- John G. Rose [EMAIL PROTECTED] wrote:
Is there really a bit per synapse? Is representing a synapse with a bit an
accurate enough simulation? One synapse is a very complicated system.
A typical
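[For scale, a back-of-envelope storage estimate -- my own numbers, using
the commonly cited figure of roughly 10^14 synapses in a human brain
(published estimates run as high as 10^15):

synapses = 1e14                    # commonly cited estimate; some say ~1e15
print(f"{synapses / 8 / 1e12:.1f} TB at 1 bit per synapse")    # 12.5 TB
print(f"{synapses * 4 / 1e12:.0f} TB at 32 bits per synapse")  # 400 TB

Either way the raw storage is tractable; the dispute here is about whether
one bit captures enough of what a synapse does.]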
[EMAIL PROTECTED] wrote:
You have to be careful with the phrase 'Manhattan-style project'.
You are right.
On previous occasions when this subject has come up I, at least, have
referred to the idea as an Apollo Project, not a Manhattan Project.
Richard Loosemore
That was a military
...
:-)
Richard Loosemore
the field in Dead
Stop mode.
Richard Loosemore
J. Andrew Rogers wrote:
On Apr 6, 2008, at 8:55 AM, Richard Loosemore wrote:
What could be compelling about a project? (Novamente or any other).
Artificial Intelligence is not a field that rests on a firm
theoretical basis, because there is no science that says this design
should produce
J. Andrew Rogers wrote:
On Apr 6, 2008, at 11:58 AM, Richard Loosemore wrote:
Artificial Intelligence research does not have a credible science
behind it. There is no clear definition of what intelligence is,
there is only the living example of the human mind that tells us that
some things
true path to AGI ... I
strongly suspect there are many...
Actually, the discussion had nothing to do with the rather bizarre
interpretation you put on it above.
Richard Loosemore
, so you cannot demand that
the person produce evidence to support the nonexistence claim. The onus
is entirely on you to provide evidence that there is a science behind
AI, if you believe that there is, not on me to demonstrate that there is
none.
Richard Loosemore
, because
the conflict resolution issues are all complexity-governed.
I am astonished that you would so blatantly call it something that it is
not.
Richard Loosemore
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
Perhaps you have not read my proposal at
http://www.mattmahoney.net/agi.html
or don't understand it.
Some of us have read it, and it has nothing whatsoever to do with
Artificial Intelligence
that Google will somehow reach a threshold and (magically) become
intelligent. Why would that happen?
If they deliberately set out to build an AGI somewhere, and then hook
that up to google, that is a different matter entirely. But that is not
what is being suggested here.
Richard Loosemore
Derek Zahn wrote:
Richard Loosemore:
I am not sure I understand.
There is every reason to think that a currently-envisionable AGI would
be millions of times smarter than all of humanity put together.
Simply build a human-level AGI, then get it to bootstrap to a level of,
say
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
Just what do you want out of AGI? Something that thinks like a person or
something that does what you ask it to?
Either will do: your suggestion achieves neither.
If I ask your non-AGI the following
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
When a computer processes a request like "how many teaspoons in a cubic
parsec?" it can extract the meaning of the question by a relatively
simple set of syntactic rules and question templates.
But when you ask it a question
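[To make that contrast concrete, a toy version of the template approach
being described -- the pattern, the unit table, and the function name are
my own invention, purely to illustrate the mechanism:

import re

# Volumes in cubic metres: US teaspoon, and a parsec (3.0857e16 m) cubed.
VOLUME_M3 = {
    "teaspoon": 4.92892e-6,
    "cubic parsec": (3.0857e16) ** 3,
}

def answer(question):
    # One hand-written question template; a real system would have many.
    m = re.match(r"how many ([\w ]+?)s? in a ([\w ]+)\?", question.lower())
    if m is None:
        return None   # nothing matched -- the simple approach stops here
    return VOLUME_M3[m.group(2)] / VOLUME_M3[m.group(1)]

print(f"{answer('How many teaspoons in a cubic parsec?'):.2e}")  # about 5.96e+54

Anything outside the anticipated templates gets no answer at all.]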
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
If you have a better plan for AGI, please let me know.
I do. I did already.
You are welcome to ask questions about it at any time (see
http://susaro.com/publications).
Question: which of these papers
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
I did also look at http://susaro.com/archives/category/general but there is no
design here either, just a list of unfounded assertions. Perhaps you can
explain why you believe point #6 in particular
, but your posts are sounding more and more like incoherent rants.
Richard Loosemore
have already
read has not been published!
Are there no depths to which you will not stoop?
Richard Loosemore
get an advance feeling that such a work is on the way are the people
on the front lines: you see all the pieces coming together just before
they are assembled for public consumption.
Whether or not someone could write down tests of progress ahead of that
point, I do not know.
Richard
club for people
dedicated to spineless Yudkowsky-worship.
Richard Loosemore
Thomas McCabe wrote:
On 4/18/08, Richard Loosemore [EMAIL PROTECTED] wrote:
Such a discussion list would be just another exclusive club for people
dedicated to spineless Yudkowsky-worship.
Richard Loosemore
Eli's not a member of fai-logistics, and I don't think he even knows
about it yet
Thomas McCabe wrote:
On 4/18/08, Richard Loosemore [EMAIL PROTECTED] wrote:
You repeatedly insinuate, in your comments above, that the idea is not
taken seriously by anyone, in spite of the fact that I have already made it quite
clear that this is false.
The burden of proof is on you to show
be necessary to
PRESUPPOSE the answer to the question that is driving these
considerations about scientific theories.
Richard Loosemore
Thomas McCabe wrote:
On Thu, Apr 24, 2008 at 3:16 AM, Samantha Atkins [EMAIL PROTECTED] wrote:
Thomas McCabe wrote:
Does NASA have a coherent
is, and what
its explanation is (you will have to wait for my book to come out before
you see why I would be so confident), so if you are anxious that a
future AI should have consciousness, I believe this can easily be arranged.
Richard Loosemore
Bertromavich Edenburg wrote:
For Virtual AI
I have just written a new blog post that is the beginning of a daily
series this week and next, when I will be launching a few broadsides
against the orthodoxy and explaining where I am going with my work.
http://susaro.com/
Richard Loosemore
of the ideas I have
written about elsewhere.
Richard Loosemore
will not be as demanding as the last (a few hundred
words instead of 4,200).
Richard Loosemore
I have stuck my neck out and written an Open Letter to AGI (Artificial
General Intelligence) Investors on my website at http://susaro.com.
All part of a campaign to get this field jumpstarted.
Next week I am going to put up a road map for my own development project.
Richard Loosemore