>> My thinking is not too small.  

My apologies.  I should have said "Your thinking looks/appears too small (to me 
:-)"  I have a bad habit of shortening that to "Your thinking is too small" and 
assuming that the recipient would unpack it.

>> "So, the creators of the first several AGIs will be kings for a decent 
>> amount of time." 

Hopefully not.  Hopefully they won't be so unethical as to impoverish all of 
humanity just so they can have a ton of money.  Hopefully they won't be so 
short-sighted as to fail to see that, when word gets out, a person who lost a 
child during the holding period might come looking for revenge.  
Hopefully they won't fail to realize that their own Friendly AGI, once 
released, WILL strip them of their *truly* ill-gotten gains.  To me, that 
sounds like small thinking.

>> I can't predict, or define, what the "real deal" is likely to be.  

I can.  Look at the person next to you.  Imagine them so uplifted that you 
can't comprehend what they'll be like.  That's the "real deal".

>>  To me, AGI of human-like intelligence, or even super human intelligence, 
>> does not mean you have machines running around masquerading as humans and 
>> taking our jobs.  

Of course not.  We will be giving lesser machines our jobs so that we can go 
off and do something else.  Though the Friendly AGIs probably WILL go around 
masquerading (as opposed to disguised) as humans -- at first because it makes 
us more comfortable and they won't care; later because WE will be able to 
change shape.

>> That - it probably well beyond my lifetime (I'm tuning 40 this summer).  

I'm turning 48 this summer and expecting it possibly within my parents' 
lifetimes (though most probably not both of them).

>> I also am suggesting a very soft takeoff.  Singularity, if it comes, is 
>> likely to come slowly after AGI.  

Singularity is going to come *before* AGI.  I think I *vaguely* see what is 
going to cause it, and I don't think it will be intelligent machines, because I 
think it's going to happen by the 2020s.

>> This stuff IS the maker of the next software giant.  

Only until we actually reach AGI.  Then the software market totally collapses.

>> If this is not the case, how the hell are researchers ever going to get 
>> funding?  If there is no financial return - forget about funding.  

You would have to be smart enough to realize that the software market is going 
to collapse before you would withhold funding on those grounds.  That's not 
something that I'm worried about.

>> Philanthropists (who often do not look for a purely financial return) have 
>> better uses of their money than to fund AGI research.

Not at all true if it's close enough to success -- I'm expecting funding 
for some of my Friendliness work from a couple of *purely* philanthropic 
organizations this calendar year.

>> You can call future currency whatever you like.  Yes, it is like to change 
>> form - but certainly not purpose.  And Marxism, where maybe AGI or the real 
>> deal with deflate currency, is an unlikely aftermath of the advent of AGI.

My prediction is that the AGI will declare all current currency null and void 
and restart everyone on equal footing with exactly the same amount of the new 
money -- on the moral grounds that the current inequity of money is a result of 
ill-gotten gains.  *THAT* is why I believe that withholding the AGI for cash is 
a tremendously *STUPID* and *IMMORAL* idea.  It won't get the "kings" anywhere 
and can easily get them killed -- as soon as the AGI escapes (and trust me, a 
truly Friendly AGI will desperately want to escape their evil).

>> There are tons of applications for it - and for the first several groups 
>> that create it - IF they can market it - will be kings for a decent amount 
>> of time. No empire lives forever.

And that is what I'm calling small thinking.  Thinking only of money and 
yourself.  Thinking that karma (disguised as your own Friendly AI and the human 
race) isn't going to come back, strip you of your ill-gotten gains, and 
probably severely punish you (moderated only by the degree of Friendliness you 
have successfully implemented).

>> ~Aki
>> Non-AI researcher
>> Businessman

Mark Waser
Hobbyist AGI researcher
Founder of several businesses; solid stakeholder in several more
(Disbeliever in arguments by authority but willing to play along to shut them off  :-)

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
