Re: [agi] META: A possible re-focusing of this list

2008-10-15 Thread j.k.

On 10/15/2008 08:01 AM,, Ben Goertzel wrote:

...

It seems to me there are two types of conversations here:

1)
Discussions of how to design or engineer AGI systems, using current 
computers, according to designs that can feasibly be implemented by 
moderately-sized groups of people


2)
Discussions about whether the above is even possible -- or whether it 
is impossible because of weird physics, or poorly-defined special 
characteristics of human creativity, or the so-called complex systems 
problem, or because AGI intrinsically requires billions of people and 
quadrillions of dollars, or whatever


...

Potentially, there could be another list, something like 
agi-philosophy, devoted to philosophical and weird-physics and other 
discussions about whether AGI is possible or not.  I am not sure 
whether I feel like running that other list ... and even if I ran it, 
I might not bother to read it very often.  I'm interested in new, 
substantial ideas related to the in-principle possibility of AGI, but 
not interested at all in endless philosophical arguments over various 
peoples' intuitions in this regard.


One fear I have is that people who are actually interested in building 
AGI, could be scared away from this list because of the large volume 
of anti-AGI philosophical discussion.   Which, I add, almost never has 
any new content, and mainly just repeats well-known anti-AGI arguments 
(Penrose-like physics arguments ... mind is too complex to engineer, 
it has to be evolved ... no one has built an AGI yet therefore it 
will never be done ... etc.)


What are your thoughts on this?




Another emphatic +1 on this idea. Having both types of discussion on the 
same list invariably results in type 2 discussions drowning out type 1 
discussions, as has happened on this list more and more in recent 
months. A lower volume list that is more tightly focused on type 1 
topics would be much appreciated.


I may still subscribe to the other list, but being able to filter the 
two lists into separate mail folders (which would be prioritized and 
read or skimmed or skipped accordingly) would save me a lot of time.



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: [agi] What is Friendly AI?

2008-09-03 Thread j.k.

On 09/03/2008 05:52 PM, Terren Suydam wrote:

I'm talking about a situation where humans must interact with the FAI without 
knowledge in advance about whether it is Friendly or not. Is there a test we 
can devise to make certain that it is?


   


This seems extremely unlikely. Consider that any set of interactions you 
have with a machine you deem friendly could have been with a genuinely 
friendly machine or with an unfriendly machine running an emulation of a 
friendly machine in an internal sandbox, with the unfriendly machine 
acting as man in the middle.


If you have only ever interacted with party B, how could you determine 
if party B is relaying your questions to party C and returning party C's 
responses to you or interacting with you directly -- given that all 
real-world solutions like timing responses against expected response 
times and trying to check for outgoing messages are not possible? Unless 
you understood party B's programming perfectly and had absolute control 
over its operation, you could not. And if you understood its programming 
that well, you wouldn't have to interact with it to determine if it is 
friendly or not.


joseph


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-29 Thread j.k.

On 08/29/2008 10:09 AM, Abram Demski wrote:

I like that argument.

Also, it is clear that humans can invent better algorithms to do
specialized things. Even if an AGI couldn't think up better versions
of itself, it would be able to do the equivalent of equipping itself
with fancy calculators.

--Abram

   


Exactly. A better transistor or a lower complexity algorithm for a 
computational bottleneck in an AGI (and implementing such) is a 
self-improvement that improves the AGI's ability to make further 
improvements -- i.e., RSI.


Likewise, it is not inconceivable that we will soon be able to improve 
human intelligence by means such as increasing neural signaling speed 
(assuming the increase doesn't have too many negative effects, which it 
might) and improving other *individual* aspects of brain biology. This 
would be RSI, too.




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-29 Thread j.k.

On 08/29/2008 01:29 PM, William Pearson wrote:

2008/8/29 j.k.[EMAIL PROTECTED]:
   

An AGI with an intelligence the equivalent of a 99.-percentile human
might be creatable, recognizable and testable by a human (or group of
humans) of comparable intelligence. That same AGI at some later point in
time, doing nothing differently except running 31 million times faster, will
accomplish one genius-year of work every second.
 


Will it? It might be starved for lack of interaction with the world
and other intelligences, and so be a lot less productive than
something working at normal speeds.

   


Yes, you're right. It doesn't follow that its productivity will 
necessarily scale linearly, but the larger point I was trying to make 
was that it would be much faster and that being much faster would 
represent an improvement that improves its ability to make future 
improvements.


The numbers are unimportant, but I'd argue that even if there were just 
one such human-level AGI running 1 million times normal speed and even 
if it did require regular interaction just like most humans do, that it 
would still be hugely productive and would represent a phase-shift in 
intelligence in terms of what it accomplishes. Solving one difficult 
problem is probably not highly parallelizable in general (many are not 
at all parallelizable), but solving tens of thousands of such problems 
across many domains over the course of a year or so probably is. The 
human-level AGI running a million times faster could simultaneously 
interact with tens of thousands of scientists at their pace, so there is 
no reason to believe it need be starved for interaction to the point 
that its productivity would be limited to near human levels of 
productivity.






---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-29 Thread j.k.

On 08/29/2008 03:14 PM, William Pearson wrote:

2008/8/29 j.k.[EMAIL PROTECTED]:

... The human-level AGI running a million
times faster could simultaneously interact with tens of thousands of
scientists at their pace, so there is no reason to believe it need be
starved for interaction to the point that its productivity would be limited
to near human levels of productivity.

 

Only if it had millions of times normal human storage capacity and
memory bandwidth, else it couldn't keep track of all the
conversations, and sufficient bandwidth for ten thousand VOIP calls at
once.
   
And sufficient electricity, etc. There are many other details that would 
have to be spelled out if we were trying to give an exhaustive list of 
every possible requirement. But the point remains that *if* the 
technological advances that we expect to occur actually do occur, then 
there will be greater-than-human intelligence that was created by 
human-level intelligence -- unless one thinks that memory capacity, chip 
design and throughput, disk, system, and network bandwidth, etc., are 
close to as good as they'll ever get. On the contrary, there are more 
promising new technologies on the horizon than one can keep track of 
(not to mention current technologies that can still be improved), which 
makes it extremely unlikely that any of these or the other relevant 
factors are close to practical maximums.

We should perhaps clarify what you mean by speed here? The speed of
the transistor is not all of what makes a system useful. It is worth
noting that processor speed hasn't gone up appreciably from the heady
days of Pentium 4s with 3.8 GHZ in 2005.

Improvements have come from other directions (better memory bandwidth,
better pipelines and multi cores).
I didn't believe that we could drop a 3 THz chip (if that were 
physically possible) onto an existing motherboard and it would scale 
linearly or that a better transistor would be the *only* improvement 
that occurs. When I said 31 million times faster, I meant the system 
as a whole would be 31 million times faster at achieving its 
computational goals. This will obviously require many improvements in 
processor design, system architecture, memory, bandwidth, physics  
materials sciences, and others, but the scenario I was trying to discuss 
was one in which these sorts of things will have occurred.


This is getting quite far off topic from the point I was trying to make 
originally, so I'll bow out of this discussion now.


j.k.


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-28 Thread j.k.

On 08/28/2008 04:47 PM, Matt Mahoney wrote:

The premise is that if humans can create agents with above human intelligence, 
then so can they. What I am questioning is whether agents at any intelligence 
level can do this. I don't believe that agents at any level can recognize 
higher intelligence, and therefore cannot test their creations.


The premise is not necessary to arrive at greater than human 
intelligence. If a human can create an agent of equal intelligence, it 
will rapidly become more intelligent (in practical terms) if advances in 
computing technologies continue to occur.


An AGI with an intelligence the equivalent of a 99.-percentile human 
might be creatable, recognizable and testable by a human (or group of 
humans) of comparable intelligence. That same AGI at some later point in 
time, doing nothing differently except running 31 million times faster, 
will accomplish one genius-year of work every second. I would argue that 
by any sensible definition of intelligence, we would have a 
greater-than-human intelligence that was not created by a being of 
lesser intelligence.





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


[agi] Paper rec: Complex Systems: Network Thinking

2008-06-29 Thread j.k.
While searching for information about the Mitchell book to be published 
in 2009 
http://www.amazon.com/Core-Ideas-Sciences-Complexity/dp/0195124413/, 
which was mentioned in passing by somebody in the last few days, I found 
a paper by the same author that I enjoyed reading and that will probably 
be of interest to others on this list.


The paper is entitled Complex systems: Network thinking 
http://web.cecs.pdx.edu/%7Emm/AIJ2006.pdf, and it was published in 
_Artificial Intelligence_ in 2006. I'd guess that sections 6 and 7 may 
be the starting point for the 2009 book. Section 6 explains three 
natural complex systems: the immune system, foraging and task allocation 
in ant colonies, and cellular metabolism. Section 7 abstracts four 
fundamental principles that Mitchell argues are common to the three 
natural complex systems described and to intelligence,  self-awareness, 
and self-control in other decentralized systems.


The four principles are:

1. Global information is encoded as statistics and dynamics of patterns 
over the system's components.

2. Randomness and probabilities are essential.
3. The system carries out a fine-grained, parallel search of possibilities.
4. The system exhibits a continual interplay of bottom-up and top-down 
processes.


See the paper for some elaboration of each of the principles and more 
information. It's available at 
http://web.cecs.pdx.edu/~mm/publications.html.







---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=106510220-47b225
Powered by Listbox: http://www.listbox.com


Re: [agi] Did this message get completely lost?

2008-06-02 Thread j.k.

On 06/01/2008 09:29 PM,, John G. Rose wrote:

From: j.k. [mailto:[EMAIL PROTECTED]
On 06/01/2008 03:42 PM, John G. Rose wrote:


A rock is conscious.
  

Okay, I'll bite. How are rocks conscious under Josh's definition or any
other non-LSD-tripping-or-batshit-crazy definition?




The way you phrase your question indicates your knuckle-dragging predisposition 
making it difficult to responsibly expend an effort in attempt to satisfy your 
- piqued inquisitive biting action.

  
Yes, my tone was overly harsh, and I apologize for that. It was more 
indicative of my frustration with the common practice on this list of 
spouting nonsense like rocks are conscious *without explaining what is 
meant* by such an ostensibly ludicrous statement or *giving any kind of 
a justification whatsoever*. This sort of intellectual sloppiness 
seriously lowers the quality of the list and makes it difficult to find 
the occasionally really insightful content.





---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


Re: [agi] Did this message get completely lost?

2008-06-01 Thread j.k.

On 06/01/2008 03:42 PM, John G. Rose wrote:

A rock is conscious.


Okay, I'll bite. How are rocks conscious under Josh's definition or any 
other non-LSD-tripping-or-batshit-crazy definition?



---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


Re: [agi] Recap/Summary/Thesis Statement

2008-03-09 Thread j.k.

On 03/09/2008 10:20 AM,, Mark Waser wrote:
My claim is that my view is something better/closer to the true CEV 
of humanity.




Why do you believe it likely that Eliezer's CEV of humanity would not 
recognize your approach is better and replace CEV1 with your improved 
CEV2, if it is actually better?



---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] Recap/Summary/Thesis Statement

2008-03-09 Thread j.k.

On 03/09/2008 02:43 PM, Mark Waser wrote:
Why do you believe it likely that Eliezer's CEV of humanity would not 
recognize your approach is better and replace CEV1 with your improved 
CEV2, if it is actually better?


If it immediately found my approach, I would like to think that it 
would do so (recognize that it is better and replace Eliezer's CEV 
with mine).


Unfortunately, it is doesn't immediately find/evaluate my approach, it 
might traverse some *really* bad territory while searching (with the 
main problem being that I perceive the proportionality attractor as 
being on the uphill side of the revenge attractor and Eliezer's 
initial CEV as being downhill of all that).


It *might* get stuck in bad territory, but can you make an argument why 
there is a *significant* chance of that happening? Given that humanity 
has many times expanded the set of 'friendlies deserving friendly 
behavior', it seems an obvious candidate for further research. And of 
course, those smarter, better, more ... ones will be in a better 
position than us to determine that.


One thing that I think most of will agree on is that if things did work 
as Eliezer intended, things certainly could go very wrong if it turns 
out that the vast majority of people --  when smarter, more the people 
they wish they could be, as if they grew up more together ... -- are 
extremely unfriendly in approximately the same way (so that their 
extrapolated volition is coherent and may be acted upon). Our 
meanderings through state space would then head into very undesirable 
territory. (This is the people turn out to be evil and screw it all up 
scenario.) Your approach suffers from a similar weakness though, since 
it would suffer under the seeming friendly people turn out to be evil 
and screw it all up before there are non-human intelligent friendlies to 
save us scenario.



Which, if either, of 'including all of humanity' rather than just 
'friendly humanity', or 'excluding non-human friendlies (initially)' do 
you see as the greater risk? Or is there some other aspect of Eliezer's 
approach that especially concerns you and motivates your alternative 
approach?


Thanks for continuing to answer my barrage of questions.

joseph

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] Recap/Summary/Thesis Statement

2008-03-08 Thread j.k.

On 03/07/2008 05:28 AM, Mark Waser wrote:

*/Attractor Theory of Friendliness/*
 
There exists a describable, reachable, stable attractor in state space 
that is sufficiently Friendly to reduce the risks of AGI to acceptable 
levels


I've just carefully reread Eliezer's CEV 
http://www.singinst.org/upload/CEV.html, and I believe your basic idea 
is realizable in Eliezer's envisioned system.


For example, if including all Friendly beings in the CEV seems 
preferable to our extrapolated smarter, better ... selves, then a system 
implementing Eliezer's approach (if working as intended) would certainly 
renormalize and take into account the CEV of non-humans. And if our 
smarter, better ... selves do not think it preferable, I'd be inclined 
to trust their judgment, assuming that the previous tests and 
confirmations that are envisioned had occurred.


The CEV of humanity is only the initial dynamic, and is *intended* to be 
replaced with something better.


joseph

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] What should we do to be prepared?

2008-03-07 Thread j.k.

On 03/07/2008 08:09 AM,, Mark Waser wrote:

There is one unique attractor in state space.


No.  I am not claiming that there is one unique attractor.  I am 
merely saying that there is one describable, reachable, stable 
attractor that has the characteristics that we want.  There are 
*clearly* other attractors. For starters, my attractor requires 
sufficient intelligence to recognize it's benefits.  There is 
certainly another very powerful attractor for simpler, brute force 
approaches (which frequently have long-term disastrous consequences 
that aren't seen or are ignored).




Of course. An earlier version said there is one unique attractor that 
identify friendliness here, and while editing it somehow ended up in 
that obviously wrong form.


Since any sufficiently advanced species will eventually be drawn 
towards F, the CEV of all species is F.


While I believe this to be true, I am not convinced that it is 
necessary for my argument.  I think that it would make my argument a 
lot easier if I could prove it to be true -- but I currently don't see 
a way to do that.  Anyone want to chime in here?


Ah, okay. I thought you were going to argue this following on from 
Omohundro's paper about drives common to all sufficiently advanced AIs 
and extend it to all sufficiently advanced intelligences, but that's my 
hallucination.




Therefore F is not species-specific, and has nothing to do with any 
particular species or the characteristics of the first species that 
develops an AGI (AI).


I believe that the F that I am proposing is not species-specific.  My 
problem is that there may be another attractor F' existing somewhere 
far off in state space that some other species might start out close 
enough to that it would be pulled into that attractor instead.  In 
that case, there would be the question as to how the species in the 
two different attractors interact.  My belief is that it would be to 
the mutual benefit of both but I am not able to prove that at this time.




For there to be another attractor F', it would of necessity have to be 
an attractor that is not desirable to us, since you said there is only 
one stable attractor for us that has the desired characteristics. I 
don't see how beings subject to these two different attractors would 
find mutual benefit in general, since if they did, then F' would have 
the desirable characteristics that we wish a stable attractor to have, 
but it doesn't.


This means that genuine conflict between friendly species or between 
friendly individuals is not even possible, so there is no question of 
an AI needing to arbitrate between the conflicting interests of two 
friendly individuals or groups of individuals. Of course, there will 
still be conflicts between non-friendlies, and the AI may arbitrate 
and/or intervene.


Wherever/whenever there is a shortage of resources (i.e. not all goals 
can be satisfied), goals will conflict.  Friendliness describes the 
behavior that should result when such conflicts arise.  Friendly 
entities should not need arbitration or intervention but should 
welcome help in determining the optimal solution (which is *close* to 
arbitration but subtly different in that it is not adverserial).  I 
would rephrase your general point as a true, adverserial relationship 
is not even possible.


That's a better way of putting it. Conflict will be possible, but 
they'll always be resolved via exchange of information rather than bullets.


The AI will not be empathetic towards homo sapiens sapiens in 
particular. It will be empathetic towards f-beings (friendly beings 
in the technical sense), whether they exist or not (since the AI 
might be the only being anywhere near the attractor).


Yes.  It will also be empathic towards beings with the potential to 
become f-beings because f-beings are a tremendous resource/benefit.


You've said elsewhere that the constraints on how it deals with 
non-friendlies are rather minimal, so while it might be 
empathic/empathetc, it will still have no qualms about kicking ass and 
inflicting pain where necessary.




This means no specific acts of the AI towards any species or 
individuals are ruled out, since it might be part of their CEV (which 
is the CEV of all beings),  even though they are not smart enough to 
realize it.


Absolutely correct and dead wrong at the same time.  You could invent 
specific incredibly low-probabaility but possible circumstances where 
*any* specific act is justified.  I'm afraid that my vision of 
Friendliness certainly does permit the intentional destruction of the 
human race if that is the *only* way to preserve a hundred more 
intelligent, more advanced, more populous races.  On the other hand, 
given the circumstance space that we are likely to occupy with a huge 
certainty, the intentional destruction of the human race is most 
certainly ruled out.  Or, in other words, there are no infinite 
guarantees but we can reduce the dangers to infinitessimally 

Re: [agi] What should we do to be prepared?

2008-03-07 Thread j.k.

On 03/07/2008 03:20 PM,, Mark Waser wrote:

 For there to be another attractor F', it would of necessity have to be
 an attractor that is not desirable to us, since you said there is only
 one stable attractor for us that has the desired characteristics.
 
Uh, no.  I am not claiming that there is */ONLY/* one unique attractor 
(that has the desired characteristics).  I am merely saying that there 
is */AT LEAST/* one describable, reachable, stable attractor that has 
the characteristics that we want.  (Note:  I've clarified a previous 
statement my adding the */ONLY/* and */AT LEAST /*and the 
parenthetical expression that has the desired characteristics.)


Okay, got it now. At least one, not exactly one.

I really don't like the particular quantifier rather minimal.  I 
would argue (and will later attempt to prove) that the constraints are 
still actually as close to Friendly as rationally possible because 
that is the most rational way to move non-Friendlies to a Friendly 
status (which is a major Friendliness goal that I'll be getting to 
shortly).  The Friendly will indeed have no qualms about kicking ass 
and inflicting pain */where necessary/* but the where necessary 
clause is critically important since a Friendly shouldn't resort to 
this (even for Unfriendlies) until it is truly necessary.


Fair enough. rather minimal is much too strong a phrase.
 
 I think you're fudging a bit here. If we are only likely to occupy the

 circumstance space with probability less than 1, then the intentional
 destruction of the human race is not 'most certainly ruled out': it is
 with very high probability less than 1 ruled out. I'm not trying to say
 it's likely; only that's it's possible. */I make this point to 
distinguish

 your approach from other approaches that purport to make absolute
 guarantees about certain things (as in some ethical systems where
 certain things are *always* wrong, regardless of context or 
circumstance)./*
 
Um.  I think that we're in violent agreement.  I'm not quite sure 
where you think I'm fudging.


The reason I thought you were fudging was that I thought you were saying 
that it is absolutely certain that the AI will never turn the planet 
into computronium and upload us *AND* there are no absolute guarantees. 
I guess I was misled when I read given the circumstance space that we 
are likely to occupy with a huge certainty, the intentional destruction 
of the human race is most certainly ruled out as meaning 'turning earth 
into computronium is certainly ruled out'. It's only certainly ruled out 
*assuming* the highly likely area of circumstance space that we are 
likely to inhabit. So yeah, I guess we do agree.


This raises another point for me though. In another post (2008-03-06 
14:36) you said:


It would *NOT* be Friendly if I have a goal that I not be turned into 
computronium even if your clause (which I hereby state that I do)


Yet, if I understand our recent exchange correctly, it is possible for 
this to occur and be a Friendly action regardless of what sub-goals I 
may or may have. (It's just extremely unlikely given ..., which is an 
important distinction.) It would be nice to have some ballpark 
probability estimates though to know what we mean by extremely unlikely. 
10E-6 is a very different beast than 10E-1000.



 
 I don't think it's inflammatory or a case of garbage in to contemplate
 that all of humanity could be wrong. For much of our history, there 
have

 been things that *every single human was wrong about*. This is merely
 the assertion that we can't make guarantees about what vastly superior
 f-beings will find to be the case. We may one day outgrow our 
attachment

 to meatspace, and we may be wrong in our belief that everything
 essential can be preserved in meatspace, but we might not be at that
 point yet when the AI has to make the decision.
 
Why would the AI *have* to make the decision?  It shouldn't be for 
it's own convenience.  The only circumstance that I could think of 
where the AI should make such a decision *for us* over our 
objections is if we would be destroyed otherwise (but there was no way 
for it to convince us of this fact before the destruction was inevitable).
It might not *have* to. I'm only saying it's possible. And it would 
almost certainly be for some circumstance that has not occurred to us, 
so I can't give you a specific scenario. Not being able to find such a 
scenario is different though from there not actually being one. In order 
to believe the later, a proof is required.
 
 Yes, when you talk about Friendliness as that distant attractor, it

 starts to sound an awful lot like enlightenment, where self-interest
 is one aspect of that enlightenment, and friendly behavior is another
 aspect.
 
Argh!  I would argue that Friendliness is *not* that distant.  Can't 
you see how the attractor that I'm describing is both self-interest 
and Friendly because **ultimately they are the same thing**  (OK, so 
maybe that *IS* 

Re: [agi] What should we do to be prepared?

2008-03-06 Thread j.k.

On 03/06/2008 08:32 AM,, Matt Mahoney wrote:

--- Mark Waser [EMAIL PROTECTED] wrote:
  

And thus, we get back to a specific answer to jk's second question.  *US*
should be assumed to apply to any sufficiently intelligent goal-driven
intelligence.  We don't need to define *us* because I DECLARE that it
should be assumed to include current day humanity and all of our potential
descendants (specifically *including* our Friendly AIs and any/all other
mind children and even hybrids).  If we discover alien intelligences, it
should apply to them as well.



... snip ...

- Killing a dog to save a human life is friendly because a human is more
intelligent than a dog.

... snip ...
  


Mark said that the objects of concern for the AI are any sufficiently 
intelligent goal-driven intelligence[s], but did not say if or how 
different levels of intelligence would be weighted differently by the 
AI. So it doesn't yet seem to imply that killing a certain number of 
dogs to save a human is friendly.


Mark, how do you intend to handle the friendliness obligations of the AI 
towards vastly different levels of intelligence (above the threshold, of 
course)?


joseph


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] What should we do to be prepared?

2008-03-06 Thread j.k.

On 03/05/2008 05:04 PM,, Mark Waser wrote:
And thus, we get back to a specific answer to jk's second question.  
*US* should be assumed to apply to any sufficiently intelligent 
goal-driven intelligence.  We don't need to define *us* because I 
DECLARE that it should be assumed to include current day humanity and 
all of our potential descendants (specifically *including* our 
Friendly AIs and any/all other mind children and even hybrids).  If 
we discover alien intelligences, it should apply to them as well.
 
I contend that Eli's vision of Friendly AI is specifically wrong 
because it does *NOT* include our Friendly AIs in *us*.  In later 
e-mails, I will show how this intentional, explicit lack of inclusion 
is provably Unfriendly on the part of humans and a direct obstacle to 
achieving a Friendly attractor space.
 
 
TAKE-AWAY:  All goal-driven intelligences have drives that will be the 
tools that will allow us to create a self-correcting Friendly/CEV 
attractor space.
 


I like the expansion of CEV from 'human being' (or humanity) to 
'sufficiently intelligent being' (all intelligent beings). It is obvious 
in retrospect (isn't it always?), but didn't occur to me when reading 
Eliezer's CEV notes. It seems related to the way in which 'humanity' has 
become broader as a term (once applied to certain privileged people 
only) and 'beings deserving of certain rights' has become broader and 
broader (pointless harm of some animals is no longer condoned [in some 
cultures]).


I wonder if this is a substantive difference with Eliezer's position 
though, since one might argue that 'humanity' means 'the [sufficiently 
intelligent and sufficiently ...] thinking being' rather than 'homo 
sapiens sapiens', and the former would of course include SAIs and 
intelligent alien beings.


joseph

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] What should we do to be prepared?

2008-03-06 Thread j.k.
At the risk of oversimplifying or misinterpreting your position, here 
are some thoughts that I think follow from what I understand of your 
position so far. But I may be wildly mistaken. Please correct my mistakes.


There is one unique attractor in state space. Any individual of a 
species that develops in a certain way -- which is to say, finds itself 
in a certain region of the state space -- will thereafter necessarily be 
drawn to the attractor if it acts in its own self interest. This 
attractor is friendliness (F). [The attractor needs to be sufficiently 
distant from present humanity in state space that our general 
unfriendliness and frequent hostility towards F is explainable and 
plausible. And it needs to be sufficiently powerful that coming under 
its influence given time is plausible or perhaps likely.]


Since any sufficiently advanced species will eventually be drawn towards 
F, the CEV of all species is F. Therefore F is not species-specific, and 
has nothing to do with any particular species or the characteristics of 
the first species that develops an AGI (AI). This means that genuine 
conflict between friendly species or between friendly individuals is not 
even possible, so there is no question of an AI needing to arbitrate 
between the conflicting interests of two friendly individuals or groups 
of individuals. Of course, there will still be conflicts between 
non-friendlies, and the AI may arbitrate and/or intervene.


The AI will not be empathetic towards homo sapiens sapiens in 
particular. It will be empathetic towards f-beings (friendly beings in 
the technical sense), whether they exist or not (since the AI might be 
the only being anywhere near the attractor). This means no specific acts 
of the AI towards any species or individuals are ruled out, since it 
might be part of their CEV (which is the CEV of all beings),  even 
though they are not smart enough to realize it.


Since the AI empathizes not with humanity but with f-beings in general, 
it is possible (likely) that some of humanity's most fundamental beliefs 
may be wrong from the perspective of an f-being. Without getting into 
the debate of the merits of virtual-space versus meat-space and 
uploading, etc., it seems to follow that *if* the view that everything 
of importance is preserved (no arguments about this, it is an assumption 
for the sake of this point only) in virtual-space and *if* turning the 
Earth into computronium and uploading humanity and all of Earth's beings 
would be vastly more efficient a use of the planet, *then* the AI should 
do this (perhaps would be morally obligated to do this) -- even if every 
human being pleads for this not to occur. The AI would have judged that 
if we were only smarter, faster, more the kind of people we would like 
to be, etc., we would actually prefer the computronium scenario.


You might argue that from the perspective of F, this would not be 
desirable because ..., but we are so far from F in state space that we 
really don't know which would be preferable from that perspective (even 
if we actually had  detailed knowledge about the computronium scenario 
and its limitations/capabilities to replace our wild speculations). It 
might be the case that property rights, say, would preclude any f-being 
from considering the computronium scenario preferable, but we don't know 
that, and we can't know that with certainty at present. Likewise, our 
analysis of the sub-goals of friendly beings might be incorrect, which 
would make it unlikely that our analysis of what a friendly being will 
actually believe is mistaken.


It's become apparent to me in thinking about this that 'friendliness' is 
really not a good term for the attitude of an f-being that we are 
talking about: that of acting solely in the interest of f-beings 
(whether others exist or not) and in consistency with the CEV of all 
sufficiently ... beings. It is really just acting rationally (according 
to a system that we do not understand and may vehemently disagree with).


One thing I am still unclear about is the extent to which the AI is 
morally obligated to intervene to prevent harm. For example, if the AI 
judged that the inner life of a cow is rich enough to deserve protection 
and that human beings can easily survive without beef, would it be 
morally obligated to intervene and prevent the killing of cows for food? 
If it would not be morally obligated, how do you propose to distinguish 
between that case (assuming it makes the judgments it does but isn't 
obligated to intervene) and another case where it makes the same 
judgments and is morally obligated to intervene (assuming it would be 
required to intervene in some cases).


Thoughts?? Apologies for this rather long and rambling post.

joseph

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 

Re: [agi] What should we do to be prepared?

2008-03-06 Thread j.k.

On 03/06/2008 02:18 PM,, Mark Waser wrote:
I wonder if this is a substantive difference with Eliezer's position 
though, since one might argue that 'humanity' means 'the 
[sufficiently intelligent and sufficiently ...] thinking being' 
rather than 'homo sapiens sapiens', and the former would of course 
include SAIs and intelligent alien beings.


Eli is quite clear that AGI's must act in a Friendly fashion but we 
can't expect humans to do so.  To me, this is foolish since the 
attractor you can create if humans are Friendly tremendously increases 
our survival probability.




The point I was making was not so much about who is obligated to act 
friendly but whose CEV is taken into account. You are saying all 
sufficiently ... beings, while Eliezer says humanity. But does Eliezer 
say 'humanity' because that humanity is *us* and we care about the CEV 
of our species (and its sub-species and descendants...) or 'humanity' 
because we are the only sufficiently ... beings that we are presently 
aware of (and so humanity would include any other sufficiently ... being 
that we eventually discover).


It just occurred to me though that it doesn't really matter whether it 
is the CEV of homo sapiens sapiens or the CEV of some alien race or that 
of AIs, since you are arguing that they are the same, since there's 
nowhere to go beyond a point except towards the attractor.


joseph

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] What should we do to be prepared?

2008-03-05 Thread j.k.

On 03/05/2008 12:36 PM,, Mark Waser wrote:

snip...

The obvious initial starting point is to explicitly recognize that the 
point of Friendliness is that we wish to prevent the extinction of the 
*human race* and/or to prevent many other horrible nasty things that 
would make *us* unhappy.  After all, this is why we believe 
Friendliness is so important.  Unfortunately, the problem with this 
starting point is that it biases the search for Friendliness in a 
direction towards a specific type of Unfriendliness.  In particular, 
in a later e-mail, I will show that several prominent features of 
Eliezer Yudkowski's vision of Friendliness are actually distinctly 
Unfriendly and will directly lead to a system/situation that is less 
safe for humans.


One of the critically important advantages of my proposed 
definition/vision of Friendliness is that it is an attractor in state 
space.  If a system finds itself outside (but necessarily 
somewhat/reasonably close) to an optimally Friendly state -- it will 
actually DESIRE to reach or return to that state (and yes, I *know* 
that I'm going to have to prove that contention).  While Eli's vision 
of Friendliness is certainly stable (i.e. the system won't 
intentionally become unfriendly), there is no force or desire 
helping it to return to Friendliness if it deviates somehow due to an 
error or outside influence.  I believe that this is a *serious* 
shortcoming in his vision of the extrapolation of the collective 
volition (and yes, this does mean that I believe both that 
Friendliness is CEV and that I, personally, (and shortly, we 
collectively) can define a stable path to an attractor CEV that is 
provably sufficient and arguably optimal and which should hold up 
under all future evolution.


TAKE-AWAY:  Friendliness is (and needs to be) an attractor CEV

PART 2 will describe how to create an attractor CEV and make it more 
obvious why you want such a thing.



!! Let the flames begin !!:-)


1. How will the AI determine what is in the set of horrible nasty 
thing[s] that would make *us* unhappy? I guess this is related to how 
you will define the attractor precisely.


2. Preventing the extinction of the human race is pretty clear today, 
but *human race* will become increasingly fuzzy and hard to define, as 
will *extinction* when there are more options for existence than 
existence as meat. In the long term, how will the AI decide who is 
*us* in the above quote?


Thanks,

jk

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] Applicable to Cyc, NARS, ATM others?

2008-02-14 Thread j.k.
On 02/14/2008 06:32 AM, Mike Tintner wrote:
 The Semantic Web, Syllogism, and Worldview
 First published November 7, 2003 on the Networks, Economics, and
 Culture mailing list.
 Clay Shirky
 


For an alternate perspective and critique of Shirky's rant, see Paul
Ford's A Response to Clay Shirky's 'The Semantic Web, Syllogism, and
Worldview', available at http://www.ftrain.com/ContraShirky.html.

-jk

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] AGI and Deity

2007-12-20 Thread j.k.
 'what might turn out to be the case', like 'if pigs could fly,
...'. If the latter, then ignore everything I've said.

-j.k.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=78280299-9bd5b2


Re: [agi] AGI and Deity

2007-12-20 Thread j.k.
Hi Stan,

On 12/20/2007 07:44 PM,, Stan Nilsen wrote:

 I understand that it's all uphill to defy the obvious.  For the
record, today I do believe that intelligence way beyond human
intelligence is not possible.

I understand that this is your belief. I was trying to challenge you to
make a strong case that it is in fact *likely* to be true (rather than
just merely possible that it's true), which I do not believe you have
done. I think you mostly just stated what you would like to be the case
-- or what you intuit to be the case (there is rarely much of a
difference) -- and then talked of the consequences that might follow
*if* it were the case.

I'm still a little unsure what exactly you mean when you say
intelligence 'way beyond' human intelligence is not possible'.

Take my example of an intelligence that could in seconds recreate all
known mathematics, and also all the untaken paths that mathematicians
could have gone down but didn't (*yet*). It seems to me you have one of
two responses to this scenario: (1) you might assert that this it could
never happen because it is not possible (please elaborate if so); or (2)
you might believe that it is possible and could happen, but that it
would not qualify as 'way beyond' human intelligence (please elaborate
if so). Which is it? Or is there another alternative?

 For the moment, do I say anything new with the following example?  I
believe it contains the essence of my argument about intelligence.

 A simple example:
  Problem: find the optimal speed limit of a specific highway.

 Who is able to judge what the optimal is? 

Optimality is always relative to some criteria. Until the criteria are
fixed, any answer is akin to answering what is the optimal quuz of
fah? No answer is correct because no answer is wrong -- or all are
right or all wrong.

 In this case, would a simpleton have as good an answer? 

It depends on the criteria. For some criteria, a simpleton has
sufficient ability to answer optimally. For example, if the optimal
limit is defined in terms of its closeness to 42 MPH, we can all
determine the optimal speed limit.

 Perhaps the simple says, the limit is how fast you want to go. 

And that is certainly the optimal solution according to some criteria.
Just as certainly, it is absolutely wrong according to other criteria
(e.g., minimization of accidents). As long the criteria are unspecified,
there can of course be disagreement.

 The 100,000 strong intellect may gyrate through many deep thoughts and
come back with 47.8 miles per hour as the best speed limit to
establish.  Wouldn't it be interesting to see how this number was
derived?  And, better still, would another 100K rated intellect come up
with exactly the same number? If given more time, would the 100K rated
intellects eventually agree?
 My belief is that they will not agree.  This is life, the thing we model.

Reality *is* messy, and supreme intellects might come to different
answers based on different criteria for optimality, but that isn't an
argument that there can be no phase transition in intelligence or that
greater intelligence is not useful for many questions and problems.

Is the point of the question to suggest that because you think that
question might not benefit from greater intelligence, that you believe
most questions will not benefit from greater intelligence? Even if that
were the case, it would have no bearing at all on whether greater
intelligence is possible, only whether it is desirable. You seem to be
arguing that it's not possible, not that it's possible but pointless.

And I would argue that if super-intelligence were good for nothing other
than trivialities like abolishing natural death, developing ubiquitous
near-free energy technologies, designing ships to the stars, etc., it
would still be worthwhile. Do you think that greater intelligence is of
no benefit in achieving these ends?

 Lastly, why would you point to William James Sidis as a great
intelligence.  If anything, his life appears to support my case - that
is, he was brilliant as a youth but didn't manage any better in life
than the average man.  Could it be because life doesn't play better when
deep thinking is applied?

I used Sidis as an example of great intelligence because he was a person
of great intelligence, regardless of anything else he may have been.
Granted, we didn't get to see what he could have become or what great
discoveries he might have had in him, but it certainly wasn't because he
lacked intelligence. For the record, I believe his later life was
primarily determined by the circus freakshow character of his early life
and the relentlessness with which the media (and the minds they served)
tore him down and tried to humiliate him. It doesn't really matter
though, as the particular example is irrelevant, and von Neumann serves
the purpose just fine.

-joseph

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:

Re: How an AGI would be [WAS Re: [agi] AGI and Deity]

2007-12-20 Thread j.k.
On 12/20/2007 07:56 PM,, Richard Loosemore wrote:

 I think these are some of the most sensible comments I have heard on
this list for a while.  You are not saying anything revolutionary, but
it sure is nice to hear someone holding out for common sense for a change!

 Basically your point is that even if we just build an extremely fast
version of a human mind, that would have astonishing repercussions.

Thanks. I agree that even if it could do nothing that humans cannot, it
would have astonishing capabilities if it were just much faster. Von
Neumann is an especially good example. He was not in the same class of
creative genius as an Einstein or a Newton, but he was probably faster
than the two of them combined, and perhaps still faster if you add in
the rest of Einstein's IAS buddies as well. Pólya tells the following
story: There was a seminar for advanced students in Zürich that I was
teaching and von Neumann was in the class. I came to a certain theorem,
and I said it is not proved and it may be difficult. Von Neumann didn't
say anything but after five minutes he raised his hand. When I called on
him he went to the blackboard and proceeded to write down the proof.
After that I was afraid of von Neumann (How to Solve It, xv).

Most of the things he is known for he did in collaboration. What you
hear again and again that was unusual about his mind is that he had an
astonishing memory, with recall reminiscent of Luria's S., and that he
was astonishingly quick. There are many stories of people (brilliant
people) bringing problems to him that they had been working on for
months, and he would go from baseline up to their level of understanding
in minutes and then rapidly go further along the path than they had been
able to. But crucially, he went where they were going already, and where
they would have gone if given months more time to work. I've heard it
said that his mind was no different in character than that of the rest
of us, just thousands of times faster and with near-perfect recall. This
is contrasted with the mind of someone like Einstein, who didn't get to
general relativity by being the fastest traveler going down a known and
well-trodden path.

How does this relate to AGI? Well, without even needing to posit
hitherto undiscovered abilities, merely having the near-perfect memory
that an AGI would have and thinking thousands of times faster than a
base human gets you already to a von Neumann. And what would von Neumann
have been if he had been thousands of times faster still? It's entirely
possible that given enough speed, there is nothing solvable that could
not be solved.

(I don't mean to suggest that von Neumann was some kind of an
idiot-savant who had no creative ability at all; obviously he was in a
very small class of geniuses who touched most of the extant fields of
his day in deep and far-reaching ways. But still, I think it's helpful
to think of him as a kind of extreme lower bound on what AGI might be.)


 By saying that, you have addressed one of the big mistakes that people
make when trying to think about an AGI:  the mistake of assuming that it
would have to Think Different in order to Think Better.  In fact, it
would only have to Think Faster.

Yes, it isn't immortality, but living for a billion years would still be
very different than living for 80. The difference between an
astonishingly huge but incremental change and a change in kind is not so
great.

 The other significant mistake that people make is to think that it is
possible to speculate about how an AGI would function without first
having at least a reasonably clear idea about how minds in general are
supposed to function.  Why?  Because too often you hear comments like
An AGI *would* probably do [x]., when in fact the person speaking
knows so little about about how minds (human or other) really work, that
all they can really say is I have a vague hunch that maybe an AGI might
do [x], although I can't really say why it would

 I do not mean to personally criticise anyone for their lack of
knowledge of minds, when I say this.  What I do criticise is the lack of
caution, as when someone says it would when they should say there is
a chance  that it might

 The problem is, that 90% of everthing said about AGIs on this list
falls into that trap.


I agree that there seems to be overconfidence in the inevitability of
things turning out the way it is hoped they will turn out, and lack of
appreciation for the unknowns and the unknown unknowns. It's hardly
unique to this list though to not recognize the contingent nature of
things turning out the way they do.

-joseph

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=78316106-039103