Alan,
With comments like this
> I want this list to be useful to me and not have to skim through
> hundreds of e-mails watching the rabbi drive conversation into useless
> spirals as he works on the implementation details of the real problems.
> Really, I'm getting dizzy from all of this. Let's s
> Jonathan Standley wrote:
>> Now here is my question, it's going to sound silly but there is
>> quite a bit behind it:
>>
>> "Of what use is computronium to a superintelligence?"
>
> If the superintelligence perceives a need for vast computational
> resources, then computronium would indeed be very useful. Assuming said SI
Goertzel the good wrote:
> Perhaps living in Washington has made me a little paranoid, but I am
> continually aware of the increasing threats posed by technology to
> humanity's survival. I often think of humanity's near-term future as a
> race between destructive and constructive technologies.
This is slightly off-topic but no more so than the rest of the thread...
> 1) That it is selfishly pragmatic for a superintelligence to deal with
> humans economically rather than converting them to computronium.
For convenience, let's rephrase this:
"the majority of arbitrarily generated s
Alan Grimes wrote:
> You have not shown this at all. From everything you've said it seems
> that you are trying to trick Ben into having so many misgivings about
> his own work that he holds it up while you create your AI first. I hope
> Ben will see through this deception and press ahead with novamente.
Ben Goertzel wrote:
>
>> Your intuitions say... I am trying to summarize my impression of your
>> viewpoint, please feel free to correct me... "AI morality is a
>> matter of experiential learning, not just for the AI, but for the
>> programmers. To teach an AI morality you must give it the right
>
Eliezer S. Yudkowsky wrote:
> 1) AI morality is an extremely deep and nonobvious challenge which has
> no significant probability of going right by accident.
> 2) If you get the deep theory wrong, there is a strong possibility of
> a silent catastrophic failure: the AI appears to be learning e
> So it's not the case that we intend to rely ENTIRELY on experiential
> learning; we intend to rely on experiential learning from an engineering
> initial condition, not from a complete tabula rasa.
>
> -- Ben G
"engineered" initial condition, I meant, oops
[typed in even more of a hurry as I ge
Hi,
> 2) If you get the deep theory wrong, there is a strong possibility of a
> silent catastrophic failure: the AI appears to be learning
> everything just
> fine, and both you and the AI are apparently making all kinds of
> fascinating discoveries about AI morality, and everything seems to be
> Your intuitions say... I am trying to summarize my impression of your
> viewpoint, please feel free to correct me... "AI morality is a matter of
> experiential learning, not just for the AI, but for the programmers.
Also, we plan to start Novamente off with some initial goals embodying ethical
> I can spot the problem in AIXI because I have practice looking for silent
> failures, because I have an underlying theory that makes it immediately
> obvious which useful properties are formally missing from AIXI, and
> because I have a specific fleshed-out idea for how to create
> moral system
> Your intuitions say... I am trying to summarize my impression of your
> viewpoint, please feel free to correct me... "AI morality is a matter of
> experiential learning, not just for the AI, but for the programmers. To
> teach an AI morality you must give it the right feedback on moral
> questions
Eliezer,
Thanks for being clear at last about what the deep issue is that you
were driving at. Now I can start getting my head around what you are
trying to talk about.
Cheers, Philip
Eliezer,
I suppose my position is similar to Ben's in that I'm more worried
about working out the theory of AI than about morality because until
I have a reasonable idea of how an AI is going to actually work I
don't see how I can productively think about something as abstract
as AI morality.
I