In a message dated 2/12/2003 10:49:23 PM Mountain Standard Time, [EMAIL PROTECTED] writes:
It fits almost perfectly. The other two parts of the definition work
very well when applied to him and his sheepish following.
Just unsub me from this list - I've had all of this I can stand, thanks.
Ben Goertzel wrote:
> you really test my tolerance as list moderator.
My apologies.
> Please, please, no personal insults. And no anti-Semitism or racism of
> any kind.
ACK.
> I guess that your reference to Eliezer as "the rabbi" may have been
> meant as amusing,
It is not at all amusing
Alan,
With comments like this
> I want this list to be useful to me and not have to skim through
> hundreds of e-mails watching the rabbi drive conversation into useless
> spirals as he works on the implementation details of the real problems.
> Really, I'm getting dizzy from all of this. Let's s
Cliff Stabbert wrote:
[On a side note, I'm curious whether, and if so how, lossy compression
might relate. It would seem that in a number of cases a simpler
algorithm than one that expresses exactly the behaviour could be
valuable, in that it expresses 95% of the behaviour of the environment
being studied
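The tradeoff Cliff is pointing at can be made concrete with a toy sketch. This is purely illustrative (the sequence and the 95% figure are contrived to match the example): a short "lossy" model predicts most of the behaviour of an environment whose exact rule would take more description to state.

```python
# Illustrative sketch: a much shorter "lossy" model captures most of the
# behaviour of an environment whose exact rule is more complicated.
def exact_sequence(n):
    # "Environment": strict 0/1 alternation, except every 20th step deviates.
    return [(i % 2) if i % 20 != 19 else 2 for i in range(n)]

def simple_model(i):
    # Lossy model: pure alternation, ignoring the rare deviations.
    return i % 2

data = exact_sequence(1000)
hits = sum(simple_model(i) == x for i, x in enumerate(data))
print(hits / len(data))  # 0.95 -- the simpler rule gets 95% of the steps right
```

Whether the missing 5% matters is exactly the lossy-compression question: it depends on whether the deviations carry the information you care about.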
Hi all,
As a digression from the recent threads on the Friendliness or otherwise of
certain uncomputable, unimplementable AI systems, I thought I'd post something
on some fascinating practical AI algorithms. These are narrow-AI at
present, but they definitely have some AGI
relevance
> Jonathan Standley wrote:
> > Now here is my question, it's going to sound silly but there is
>> quite a bit behind it:
> > "Of what use is computronium to a superintelligence?"
> If the superintelligence perceives a need for vast computational
> resources, then computronium would indeed be very
> Now here is my question, it's going to sound silly but there is quite a
> bit behind it:
>
> "Of what use is computronium to a superintelligence?"
If the superintelligence perceives a need for vast computational resources,
then computronium would indeed be very useful. Assuming said SI
> Steve, Ben, do you have any gauge as to what kind of grants are hot
> right now or what kind of narrow AI projects with AGI implications have
> recently been funded through military agencies?
The list would be very long. Just look at the DARPA IPTO website for
starters...
http://www.darpa.mi
Daniel,
For a start look at the IPTO web page and links from:
http://www.darpa.mil/ipto/research/index.html
Darpa has a variety of Offices which sponsor AI related work, but IPTO is
now being run by Ron Brachman, the former president of the AAAI. When I
listened to the talk he gave at Cycorp in
> I believe that as evidence of AGI (e.g. software that can learn
> from reading) becomes widely known: (1) the military will provide abundant
> funding - possibly in excess of what commercial firms could do without a
> consortium (2) public outcry will assure that military AGI development
> has civi
Goertzel the good wrote:
> Perhaps living in Washington has made me a little paranoid, but I am
> continually aware of the increasing threats posed by technology to
> humanity's survival. I often think of humanity's near-term future as a
> race between destructive and constructive technologies.
Hi Eliezer,
> An intuitively fair, physically realizable challenge, with important
> real-world analogues, formalizable as a computation which can be fed
> either a tl-bounded uploaded human or an AIXI-tl, for which the human
> enjoys greater success measured strictly by total reward over time, du
Brad Wyble wrote:
>
> Under the ethical code you describe, the AGI would swat
> them like a bug with no more concern than you swatting a mosquito.
>
I did not describe an ethical code, I described two scenarios about a
human (myself) then suggested the non-bug-swatting scenario was
possible, analo
Eliezer S. Yudkowsky wrote:
Has the problem been thought up just in the sense of "What happens when
two AIXIs meet?" or in the formalizable sense of "Here's a computational
challenge C on which a tl-bounded human upload outperforms AIXI-tl?"
I don't know of anybody else considering "human uplo
This is slightly off-topic but no more so than the rest of the thread...
> 1) That it is selfishly pragmatic for a superintelligence to deal with
> humans economically rather than converting them to computronium.
For convenience, let's rephrase this
"the majority of arbitrarily generated s
Eliezer,
> A (selfish) human upload can engage in complex cooperative
> strategies with
> an exact (selfish) clone, and this ability is not accessible to AIXI-tl,
> since AIXI-tl itself is not tl-bounded and therefore cannot be simulated
> by AIXI-tl, nor does AIXI-tl have any means of abstractly
Shane Legg wrote:
Eliezer,
Yes, this is a clever argument. This problem with AIXI has been
thought up before but only appears, at least as far as I know, in
material that is currently unpublished. I don't know if anybody
has analysed the problem in detail as yet... but it certainly is
a very i
Alan Grimes wrote:
> You have not shown this at all. From everything you've said it seems
> that you are trying to trick Ben into having so many misgivings about
> his own work that he holds it up while you create your AI first. I hope
> Ben will see through this deception and press ahead with no
Eliezer,
Yes, this is a clever argument. This problem with AIXI has been
thought up before but only appears, at least as far as I know, in
material that is currently unpublished. I don't know if anybody
has analysed the problem in detail as yet... but it certainly is
a very interesting question
Ben Goertzel wrote:
>
>> Your intuitions say... I am trying to summarize my impression of your
>> viewpoint, please feel free to correct me... "AI morality is a
>> matter of experiential learning, not just for the AI, but for the
>> programmers. To teach an AI morality you must give it the right
>
As has been pointed out on this list before, the military IS interested in
AGI, and primarily for information integration rather than directly
weapons-related purposes.
See
http://www.darpa.mil/body/NewsItems/pdf/iptorelease.pdf
for example.
-- Ben G
>
> I can't imagine the military would b
> Hey, I'm a relatively new subscriber. I know that we are talking about
> how to make an AI system friendly but has anyone considered the opposite
> -- building AI weapons.
>
> In the US anyway, a significant portion of AI research funding comes
> from the military, and this current administr
On Wed, 12 Feb 2003, Brad Wyble wrote:
>
> I can't imagine the military would be interested in AGI, by its very
> definition. The military would want specialized AI's, constructed
> around a specific purpose and under their strict control. An AGI goes
> against everything the military wants from
- Original Message -
From: Philip Sutton
To: [EMAIL PROTECTED]
Sent: Wednesday, February 12, 2003 2:55 PM
Subject: Re: [agi] AI Morality -- a hopeless quest

Brad,
Maybe what you said below is the key to friendly GAI
> I don't think any human a
>
> Brad Wyble wrote:
> >
> > Tell me this, have you ever killed an insect because it bothered you?
> >
>
> Well, of course. But there's bothering and BOTHERING. Let me give you
> an example: I will often squash a deer-fly or a mosquito because a) they
> hurt and b) they can infect me with dise
Daniel Colonnese wrote:
>
> Hey, I'm a relatively new subscriber. I know that we are talking
> about how to make an AI system friendly but has anyone considered the
> opposite -- building AI weapons.
>
Short answer: Yes.
Slightly longer answer: Most of the people I have discussed this with
are e
Brad Wyble wrote:
>
> Tell me this, have you ever killed an insect because it bothered you?
>
Well, of course. But there's bothering and BOTHERING. Let me give you
an example: I will often squash a deer-fly or a mosquito because a) they
hurt and b) they can infect me with disease. However I liv
Hi Daniel,
On Wed, 12 Feb 2003, Daniel Colonnese wrote:
> Bill Hibbard wrote:
>
> >We better not make them in our own image. We can make
> >them with whatever reinforcement values we like, rather
> >than the ones we humans were born with. Hence my often
> >repeated suggestion that they reinforce
>
> 1) matter manipulating machines of such a grand scale are not possible
> 2) mmm's are possible, but never actually do such a thing
> 3) mmm's are possible and they created this current universe as a simulation
> ala The Matrix.
>
As unlikely as it may be, we have to consider #4: that we're t
On Wed, 2003-02-12 at 06:46, Cliff Stabbert wrote:
> That is to say, the "the simplest explanation is right" heuristic
> tends to break down in the presence of life -- and the more so the
> more the life is conscious. Because among the things that it is
> conscious of is that it *can be* and *is*
Hello All..
After reading all this wonderful debate on AI morality and Eliezer's People
eating AGI concerns, I'm left wondering this: "Am I the *only* one here who
thinks that the *most* likely scenario is that such a thing as a "universe
devouring" AGI is utterly impossible?"
Everyone here seems
Arthur,
I disagree with your 'strong' dismissal of a viable AGI Morality and Ethics
System, i.e. an 'AGI MES'.
My reasoning is that, by definition, AGI systems must be developed in a
complex environment to solve complex problems, or risk complete irrelevance
altogether. So, if we assume that t
> Brad,
>
> Maybe what you said below is the key to friendly GAI
>
> > I don't think any human alive has the moral and ethical underpinnings
> > to allow them to resist the corruption of absolute power in the long
> > run. We are all kept in check by our lack of power, the competition
> > of
Brad Wyble wrote:
>
> Tell me this, have you ever killed an insect because it bothered you?
"In other words, posthumanity doesn't change the goal posts. Being human
should still confer human rights, including the right not to be enslaved,
eaten, etc.. But perhaps being posthuman will confer post
Brad,
Maybe what you said below is the key to friendly GAI
> I don't think any human alive has the moral and ethical underpinnings
> to allow them to resist the corruption of absolute power in the long
> run. We are all kept in check by our lack of power, the competition
> of our fe
I can't imagine the military would be interested in AGI, by its very definition. The
military would want specialized AI's, constructed around a specific purpose and under
their strict control. An AGI goes against everything the military wants from its
weapons and agents. They train soldiers
Bill Hibbard wrote:
>We better not make them in our own image. We can make
>them with whatever reinforcement values we like, rather
>than the ones we humans were born with. Hence my often
>repeated suggestion that they reinforce behaviors
>according to human happiness.
Hey, I'm a relatively new s
On Wed, 12 Feb 2003, Arthur T. Murray wrote:
> The quest is as hopeless as it is with human children.
> Although Bill Hibbard singles out "the power of super-intelligence"
> as the reason why we ought to try to instill morality and friendliness
> in our AI offspring, such offspring are made in our
>
> I am exceedingly glad that I do not share your opinion on this. Human
> altruism *is* possible, and indeed I observe myself possessing a
> significant measure of it. Anyone doubting their ability to 'resist
> corruption' should not IMO be working in AGI, but should be doing some
> serious in
On Wed, 12 Feb 2003, Michael Roy Ames wrote:
> Arthur T. Murray wrote:
> >
> > [snippage]
> > why should we creators of Strong AI have to take any
> > more precautions with our Moravecian "Mind Children"
> > than human parents do with their human babies?
> >
>
> Here are three reasons I can think of, Arthur:
Brad Wyble wrote:
> I don't think any human alive has the moral and ethical underpinnings
> to allow them to resist the corruption of absolute power in the long
> run.
>
I am exceedingly glad that I do not share your opinion on this. Human
altruism *is* possible, and indeed I observe myself posse
Alan Grimes wrote:
>
> You have not shown this at all. From everything you've said it seems
> that you are trying to trick Ben into having so many misgivings about
> his own work that he holds it up while you create your AI first. I
> hope Ben will see through this deception and press ahead with
>
Arthur T. Murray wrote:
>
> [snippage]
> why should we creators of Strong AI have to take any
> more precautions with our Moravecian "Mind Children"
> than human parents do with their human babies?
>
Here are three reasons I can think of, Arthur:
1) Because we know in advance that 'Strong AI', as
I don't think any human alive has the moral and ethical underpinnings to allow them to
resist the corruption of absolute power in the long run. We are all kept in check by
our lack of power, the competition of our fellow humans, the laws of society, and the
instructions of our peers. Remove a
Hi Arthur,
On Wed, 12 Feb 2003, Arthur T. Murray wrote:
> . . .
> Since the George and Barbara Bushes of this world
> are constantly releasing their little monsters onto the planet,
> why should we creators of Strong AI have to take any
> more precautions with our Moravecian "Mind Children"
> tha
Okay, let's see, I promised:
An intuitively fair, physically realizable challenge, with important
real-world analogues, formalizable as a computation which can be fed
either a tl-bounded uploaded human or an AIXI-tl, for which the human
enjoys greater success measured strictly by total reward o
Alois Schicklgruber and his wife Klara probably did not
give much thought to possible future aberrations when
"unser kleine Adi" was born to them on 20 April 1889.
"Our little Adolf" Hitler was probably cute and cuddly
like any other baby. No one could be expected to know
whether he would grow i
Alan,
> You have not shown this at all. From everything you've said it seems
> that you are trying to trick Ben into having so many misgivings about
> his own work that he holds it up while you create your AI first. I
> hope Ben will see through this deception and press ahead with
> novamente. --
Eliezer S. Yudkowsky wrote:
> 1) AI morality is an extremely deep and nonobvious challenge which has
> no significant probability of going right by accident.
> 2) If you get the deep theory wrong, there is a strong possibility of
> a silent catastrophic failure: the AI appears to be learning e
> So it's not the case that we intend to rely ENTIRELY on experiential
> learning; we intend to rely on experiential learning from an engineering
> initial condition, not from a complete tabula rasa.
>
> -- Ben G
"engineered" initial condition, I meant, oops
[typed in even more of a hurry as I ge
Hi,
> 2) If you get the deep theory wrong, there is a strong possibility of a
> silent catastrophic failure: the AI appears to be learning
> everything just
> fine, and both you and the AI are apparently making all kinds of
> fascinating discoveries about AI morality, and everything seems to be
> Your intuitions say... I am trying to summarize my impression of your
> viewpoint, please feel free to correct me... "AI morality is a matter of
> experiential learning, not just for the AI, but for the programmers.
Also, we plan to start Novamente off with some initial goals embodying
ethical
> I can spot the problem in AIXI because I have practice looking for silent
> failures, because I have an underlying theory that makes it immediately
> obvious which useful properties are formally missing from AIXI, and
> because I have a specific fleshed-out idea for how to create
> moral system
> Your intuitions say... I am trying to summarize my impression of your
> viewpoint, please feel free to correct me... "AI morality is a matter of
> experiential learning, not just for the AI, but for the programmers. To
> teach an AI morality you must give it the right feedback on moral
> quest
Eliezer,
Thanks for being clear at last about what the deep issue is that you
were driving at. Now I can start getting my head around what you are
trying to talk about.
Cheers, Philip
Shane Legg wrote:
Hi Cliff,
And if the "compressibility of the Universe" is an assumption, is
there a way we might want to clarify such an assumption, i.e., aren't
there numerical values that attach to the *likelihood* of gravity
suddenly reversing direction; numerical values attaching to the
li
Wednesday, February 12, 2003, 3:34:31 AM, Shane Legg wrote:
Shane, thanks for the explanation of Kolmogorov complexity and its
relation to the matter at hand.
[On a side note, I'm curious whether and if so, how, lossy compression
might relate. It would seem that in a number of cases a simpler
al
Hi Cliff,
> I should add, the example you gave is what raised my questions: it
> seems to me an essentially untrainable case because it presents a
> *non-repeatable* scenario.
>
> If I were to give to an AGI a 1,000-page book, and on the first 672
> pages was written the word "Not", it may predict
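Cliff's 1,000-page "Not" example can be quantified with a standard Bayesian device, Laplace's rule of succession: under a uniform prior on the unknown rate, seeing the same outcome on all n trials gives probability (n+1)/(n+2) that the next trial matches. A minimal sketch (the page counts are taken from the example; the function name is mine):

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    # Laplace's rule: posterior predictive probability of another success,
    # assuming a uniform prior over the unknown success rate.
    return Fraction(successes + 1, trials + 2)

# After 672 pages that all read "Not", the predicted probability that
# page 673 also reads "Not":
p = rule_of_succession(672, 672)
print(p, float(p))  # 673/674 (about 0.9985)
```

High confidence, but never certainty, which is the point of the *non-repeatable* worry: a single surprising page can still overturn the prediction.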
Eliezer,
I suppose my position is similar to Ben's in that I'm more worried
about working out the theory of AI than about morality because until
I have a reasonable idea of how an AI is going to actually work I
don't see how I can productively think about something as abstract
as AI morality.
I
Hi Cliff,
> So "Solomonoff induction", whatever that precisely is, depends on a
> somehow compressible universe. Do the AIXI theorems *prove* something
> along those lines about our universe,
AIXI and related work does not prove that our universe is compressible.
Nor do they need to. The sun seems
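Shane's point can be illustrated with a toy Solomonoff-style mixture: the construction doesn't require the data to be compressible, it just assigns each computable hypothesis a prior of 2^-(program length) and renormalises over the hypotheses still consistent with the data. This is a sketch only, not the real construction; the hypothesis set and the "program lengths" are invented for illustration:

```python
from fractions import Fraction

# Each entry: (name, "program length" in bits, predictor mapping the history
# seen so far to the next bit). Prior weight of a hypothesis is 2**-length.
hypotheses = [
    ("always-0",  3, lambda hist: 0),
    ("always-1",  3, lambda hist: 1),
    ("alternate", 5, lambda hist: len(hist) % 2),
]

def predict_one(history):
    """Posterior probability that the next bit is 1, mixing over all
    hypotheses that reproduce the observed history exactly."""
    consistent = [(length, f) for _, length, f in hypotheses
                  if all(f(history[:i]) == b for i, b in enumerate(history))]
    total = sum(Fraction(1, 2**length) for length, _ in consistent)
    p1 = sum(Fraction(1, 2**length) for length, f in consistent
             if f(history) == 1)
    return p1 / total

print(predict_one([0]))  # 1/5: the shorter "always-0" outweighs "alternate"
```

If the data happen to be incompressible, the mixture simply never concentrates on a short hypothesis; nothing in the machinery breaks.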
Ben, you and I have a long-standing disagreement on a certain issue which
impacts the survival of all life on Earth. I know you're probably bored
with it by now, but I hope you can understand why, given my views, I keep
returning to it, and find a little tolerance for my doing so.
The issue is