Re: [agi] The Grounding of Maths

2007-10-12 Thread Eliezer S. Yudkowsky

Benjamin Goertzel wrote:


Well ... going beyond imaginary numbers...  how do *you* do mathematics 
in quaternionic and octonionic algebras?  Via visualization?  
Personally, I can sorta visualize 4D, but I suck at visualizing 
8-dimensional space, so I tend to reason more abstractly when thinking 
about such things...


Just visualize it in N-dimensional space, then let N go to 8.
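
(Purely as an illustration of the abstract route - a minimal Python sketch, not 
anything from the thread: quaternion arithmetic follows straight from the 
Hamilton multiplication table, with no 4-D, let alone 8-D, mental picture required.)

def quat_mul(p, q):
    # Hamilton product of quaternions given as (w, x, y, z) tuples.
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

# Non-commutativity falls out of the table, not out of a picture:
i = (0, 1, 0, 0)
j = (0, 0, 1, 0)
print(quat_mul(i, j))   # (0, 0, 0, 1)   i.e.  k
print(quat_mul(j, i))   # (0, 0, 0, -1)  i.e. -k

The same move extends to octonions by treating them as pairs of quaternions 
under the Cayley-Dickson construction - which is exactly the "reason more 
abstractly" route.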

--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



Re: Self-improvement is not a special case (was Re: [agi] Religion-free technical content)

2007-10-12 Thread Eliezer S. Yudkowsky

Tim Freeman wrote:


My point is that if one is worried about a self-improving Seed AI
exploding, one should also be worried about any AI that competently
writes software exploding.


There *is* a slight gap between competently writing software and 
competently writing minds.  Large by human standards, not much by 
interspecies standards.  It does involve new math issues, which is why 
some of us are much impressed by it.  Anyone with even a surface grasp 
of the basic concept on a math level will realize that there's no 
difference between self-modifying and writing an outside copy of 
yourself, but *either one* involves the sort of issues I've been 
calling "reflective".
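
To make the equivalence concrete, a toy sketch in Python (purely illustrative; 
the output file name and the "improvement" are invented, and nothing here is a 
real AI architecture): a program that "improves" by writing out an edited copy 
of its own source is doing the same operation as one that patches itself in 
place, and faces the same reflective question - what guarantees that the 
rewritten code still satisfies the criteria the original used when it approved 
the rewrite?

import re

VERSION = 1  # the constant our "improvement" will bump

def write_successor(source_path, successor_path):
    # Read our own source text...
    with open(source_path) as f:
        source = f.read()
    # ...apply the "improvement" (here, just incrementing VERSION)...
    successor = re.sub(r"VERSION = (\d+)",
                       lambda m: "VERSION = %d" % (int(m.group(1)) + 1),
                       source)
    # ...and write it out as a separate program.  Overwriting source_path
    # instead would be "self-modification"; the operation is the same.
    with open(successor_path, "w") as f:
        f.write(successor)

if __name__ == "__main__":
    write_successor(__file__, "successor.py")  # hypothetical output file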


--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



Re: [agi] Nirvana? Manyana? Never!

2007-11-01 Thread Eliezer S. Yudkowsky

Jiri Jelinek wrote:


Let's go to an extreme: Imagine being an immortal idiot. No matter
what you do and how hard you try, the others will always be so much
better at everything that you will eventually become totally
discouraged, or even afraid to touch anything, because it would just
always demonstrate your relative stupidity (/limitations) in some way.
What a life. Suddenly, there is this amazing pleasure machine as a new
god-like-style of living for poor creatures like you. What do you do?


Jiri,

Is this really what you *want*?

Out of all the infinite possibilities, this is the world in which you 
would most want to live?


--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



Re: [agi] Nirvana? Manyana? Never!

2007-11-01 Thread Eliezer S. Yudkowsky

Jiri Jelinek wrote:

Is this really what you *want*?
Out of all the infinite possibilities, this is the world in which you
would most want to live?


Yes, great feelings only (for as many people as possible), with the
engine being continuously improved by AGI, which would also take care
of all related tasks, including safety issues etc. The quality of our
life is in feelings. Or do we know anything better? We do what we do
for feelings, and we alter them very indirectly. We can optimize and
get the greatest stuff allowed by the current design through direct
altering/stimulation (changes would be required so we can take it
non-stop). Whatever you enjoy, it's not really the thing you are
doing. It's the triggered feeling, which can be obtained and
intensified more directly. We don't know exactly how those great
feelings (/qualia) work, but there are a number of chemicals and brain
regions known to play key roles.


I didn't ask whether it's possible.  I'm quite aware that it's 
possible.  I'm asking if this is what you want for yourself.  Not what 
you think that you ought to logically want, but what you really want.


Is this what you lived for?  Is this the most that Jiri Jelinek wants 
to be, wants to aspire to?  Forget, for the moment, what you think is 
possible - if you could have anything you wanted, is this the end you 
would wish for yourself, more than anything else?


--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



Re: [agi] Nirvana? Manyana? Never!

2007-11-02 Thread Eliezer S. Yudkowsky

Jiri Jelinek wrote:


Ok, seriously, what's the best possible future for mankind you can imagine?
In other words, where do we want our cool AGIs to get us? I mean
ultimately. What is it at the end as far as you can see?


That's a very personal question, don't you think?

Even the parts I'm willing to answer have long answers.  It doesn't 
involve my turning into a black box with no outputs, though.  Nor 
ceasing to act, nor ceasing to plan, nor ceasing to steer my own 
future through my own understanding of it.  Nor being kept as a pet. 
I'd sooner be transported into a randomly selected anime.


--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



Re: [agi] Nirvana? Manyana? Never!

2007-11-02 Thread Eliezer S. Yudkowsky

Jiri Jelinek wrote:

On Nov 2, 2007 4:54 AM, Vladimir Nesov [EMAIL PROTECTED] wrote:


You turn it into a tautology by mistaking 'goals' in general for
'feelings'. Feelings form one, somewhat significant at this point,
part of our goal system. But the intelligent part of the goal system
is a much more 'complex' thing and can also act as a goal in itself.
You can say that AGIs will be able to maximize satisfaction of the
intelligent part too,


Could you please provide one specific example of a human goal which
isn't feeling-based?


Saving your daughter's life.  Most mothers would rather save their 
daughter's life than feel that they saved their daughter's life. 
In proof of this, mothers sometimes sacrifice their lives to save 
their daughters and never get to feel the result.  Yes, this is 
rational, for there is no truth that destroys it.  And before you 
claim all those mothers were theists, there was an atheist police 
officer, signed up for cryonics, who ran into the World Trade Center 
and died on September 11th.  As Tyrone Pow once observed, for an 
atheist to sacrifice their life is a very profound gesture.


--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



Re: [agi] Questions

2007-11-05 Thread Eliezer S. Yudkowsky

Monika Krishan wrote:


2. Would it be a worthwhile exercise to explore what Human General 
Intelligence, in its present state, is capable of?


Nah.

--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



[agi] Re: What best evidence for fast AI?

2007-11-10 Thread Eliezer S. Yudkowsky

Robin Hanson wrote:
I've been invited to write an article for an upcoming special issue of 
IEEE Spectrum on the Singularity, which in this context means rapid and 
large social change from human-level or higher artificial 
intelligence.   I may be among the most enthusiastic authors in that 
issue, but even I am somewhat skeptical.   Specifically, after ten years 
as an AI researcher, my inclination has been to see progress as very 
slow toward an explicitly-coded AI, and so to guess that the whole brain 
emulation approach would succeed first if, as it seems, that approach 
becomes feasible within the next century. 

But I want to try to make sure I've heard the best arguments on the 
other side, and my impression was that many people here expect more 
rapid AI progress.   So I am here to ask: where are the best analyses 
arguing the case for rapid (non-emulation) AI progress?   I am less 
interested in the arguments that convince you personally than arguments 
that can or should convince a wide academic audience.


All the replies on SL4 as of 10:40AM Pacific seem pretty good to me. 
Why are you asking after rapid progress?  It doesn't seem to be the 
key question.


Kahneman's "Economic preferences or attitude expressions? An analysis 
of dollar responses to public issues" makes the point that in many 
cases, people have no anchors, no starting points, for questions like 
"How much should this company be penalized for crime X?", and so they 
substitute judgment of "How bad was this company, on a scale of 1 to 
Y?", where the actual scale Y varies depending on the person, and then 
tack "million dollars" onto the end.


On one memorable occasion, an AI researcher said to me that he thought 
it would take 500 years before AGI.


500 years?  500 years ago we didn't even have *science*.

So what's going on?  I suspect that, especially among AI researchers, 
the question "How long will it be before we get AGI?" is more of an 
attitude expression than a historical estimate - "On a scale of 1 to 
Y, how hard is it to build AGI?" - where Y varies from person to 
person, and then they tack on "years" at the end.  Naturally, building 
AGI will seem *very* hard if you can't imagine any way to do it (the 
imaginability heuristic), and so they'll give a response near the upper 
end of their scale.  That researcher responded as if I had asked, "On a 
scale of 1 to 500, how hard does building AGI *feel*?"
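
A toy simulation of that substitution (all numbers invented, purely for 
illustration): give each respondent a private top-of-scale Y, have them rate 
"how hard AGI feels" near the top of it, and relabel the rating as years - the 
spread of the resulting "timelines" then mostly reflects the spread of Y, not 
any model of the research path.

import random
random.seed(0)

def reported_timeline():
    y_max = random.choice([50, 100, 200, 500])  # respondent's private top-of-scale Y
    attitude = random.uniform(0.7, 1.0)         # "building AGI feels very hard"
    return attitude * y_max                     # the rating gets relabeled as "years"

print([round(reported_timeline()) for _ in range(10)])
# The variation is driven almost entirely by y_max, i.e. by the scale,
# not by any estimate of the actual difficulty.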


The key realization here is that building a flying machine would also 
*feel* very hard if you did not know how to do it.  But this reflects 
a knowledge gap, rather than solid knowledge of specific 
implementation difficulties.  We know how stars work; therefore we 
know it would be difficult to build a star from hydrogen atoms.  Some 
magazine or other, in 1903, said that future flying machines would be 
built by the work of millions of years(!) of mathematicians and 
mechanists.  They didn't know how to do it, and they mistook this 
feeling of difficulty for the positive estimate that doing it *with* 
knowledge would be very difficult.


As for knowledge itself, that is a matter of pure basic research, and 
if we knew the outcome we wouldn't need to do the research.  How can 
you put a time estimate on blue-sky fundamental research delivering a 
brilliant new insight?  Far or near?


It's also possible that AI researchers are substituting judgment of 
"How long would it take to create AGI *using the techniques you 
know*?", in which case 500 years might well be an underestimate, if it 
could be done at all, like trying to carve Mount Rushmore using 
toothpicks.


Others may substitute judgment of "How good do you feel about AI?" and 
give a short time estimate, reflecting their general feelings of 
goodwill toward the field.


We have no reason to believe that timing is predictable even in 
principle - that it will be a narrow distribution over Everett 
branches - let alone that we can predict it in practice with knowledge 
presently available to us.


--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



[agi] OpenMind, MindPixel founders both commit suicide

2008-01-19 Thread Eliezer S. Yudkowsky

http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all

I guess the moral here is "Stay away from attempts to hand-program a 
database of common-sense assertions."


--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-20 Thread Eliezer S. Yudkowsky

Joshua Fox wrote:

  Turing also committed suicide.
And Chislenko. Each of these people had different circumstances, and
suicide strikes everywhere, but I wonder if there is a common thread.


Ramanujan, like many other great mathematicians and achievers, died 
young. There are, on the other hand, many great mathematicians and 
achievers who lived to old age. I dare not say whether it is 
dangerous to be a genius without access to more complete statistics.

-- Kai-Mikael Jää-Aro
- http://www.nada.kth.se/~kai/lectures/geb.html

--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence


