Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-20 Thread Steve Richfield
Ben,

Mapping RRA to Hegel's space isn't trivial, but here goes...

On 11/19/08, Ben Goertzel [EMAIL PROTECTED] wrote:


 I have nothing against Hegel; I think he was a great philosopher.  His
 Logic is really fantastic reading.  And, having grown up surrounded by
 Marxist wannabe-revolutionaries (most of whom backed away from strict
 Marxism in the mid-70s when the truth about the Soviet Union came out in
 America), I am also aware there is a lot of deep truth in Marx's thought, in
 spite of the evil that others wrought with it after his death...


It's refreshing to be able to discuss the structure of problems rather than
simply planning the future of the world as:
1.  We will build AGIs.
2.  The AGIs will create a Singularity.
3.  Then, something wonderful (or horrible) will happen.



 I just think that Hegel's dialectical philosophy is clearer than your
 reverse reductio ad absurdum,


That is because he saw a process that he didn't fully understand, leaving
the participants to argue their many positions for decades/centuries
until many consensus resolutions were identified. Things always look simpler
when you ignore the necessary details.

BTW, there was once a government run by consensus - where all differences
were argued until everyone agreed. That was early Islam, first under Mohamed
and later under the four subsequent caliphs who had worked with Mohamed until
his death. Of course, this is ONLY possible given some sort of understanding
of RRA, yet historical accounts do NOT include anything like RRA (that I have
found). Then, things came unraveled. In a logical world (if this is even
possible given illogical people), consensus should be possible. Allowing for
a few idiots, it should take a 90% majority to pass any law or do anything
that is potentially destructive (as though there were anything that a
government could do that is NOT potentially destructive). In short, the
whole "rule by majority" thing is severely flawed, though it may be OK to
choose representatives.



 and so I'm curious to know what you think your formulation *adds* to the
 classic Hegelian one...


A clear path to resolving differences, rather than leaving it to unstructured
argument, compromise, etc., as Hegel did. It directly challenges BOTH sides
of an intractable dispute to seek out the shared bad assumptions and NOT
compromise, or else to shut up because they are simply not smart enough to
participate.



 From what I understand, your RRA heuristic says that, sometimes, when both
 X and ~X are appealing to rational people, there is some common assumption
 underlying the two, which when properly questioned and modified can yield a
 new Y that transcends and in some measure synthesizes aspects of X and ~X


Usually, neither X nor ~X is even deducible from Y. For example, in the
abortion debate, the pro-life side is happy because abortions are more
effectively stopped than if a law had been passed, and the pro-choice side is
happy because there are no laws in place. Neither side can even get to the
contentious point that they were at before.



 I suppose Hegel would have called Y the dialectical synthesis of X and ~X,
 right?


Not being a Hegel scholar, I can't say for sure, but that's the way I see it.
Hegel just failed to take the next step of mapping out exactly how to reach a
dialectical synthesis, which is what RRA does.



 BTW, we are certainly not seeing the fall of capitalism now.  Marx's
 dialectics-based predictions made a lot of errors; for instance, both he and
 Hegel failed to see the emergence of the middle class as a sort of
 dialectical synthesis of the ruling class and the proletariat ;-)


... and America failed to see the coming disappearance of the middle class,
which throws society back into Marx's realm.



 ... but, I digress!!


I don't think so, as we are now thinking about things at the level that a
future AGI would have to be able to think at to provide societal guidance.
If we can't function at this level ourselves, how are we ever going to
create AGIs that do this?


 So, how would you apply your species of dialectics to solve the problem of
 consciousness?  This is a case where, clearly, rational intelligent and
 educated people hold wildly contradictory opinions,


... which is a pretty clear demonstration that consciousness doesn't work
very well. This was EXACTLY my point when discussing Dr. Eliza (which also
has its obvious limitations): other methods can potentially avoid the
logical traps of the conscious process.



 e.g.




 X1 = consciousness does not exist

 X2 = consciousness is a special extra-physical entity that correlates with
 certain physical systems at certain times

 X3 = consciousness is a kind of physical entity

 X4 = consciousness is a property immanent in everything, that gets
 focused/structured differently via interaction with different physical
 systems

 All these positions contradict each other.  How do you suggest to
 dialectically synthesize them?  ;-)


No, properly restating the above question: 

Definition of pain (was Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...)

2008-11-19 Thread Matt Mahoney
--- On Tue, 11/18/08, Mark Waser [EMAIL PROTECTED] wrote:
 add-rule kill-file Matt Mahoney

Mark, whatever happened to that friendliness-religion you caught a few months 
ago?

Anyway, with regard to grounding, internal feedback, and volition, autobliss 
already has two of these three properties, and the third could be added with an 
insignificant effect.

With respect to grounding, I assume you mean association of language symbols
with nonverbal input. For example, a text-based AI could associate the symbol
"red" with "rose" and "stop sign", but if it lacked vision then these symbols
would not be grounded. To ground "red" it would need to be associated with
red-sensing pixels.

In this sense, autobliss has grounded the symbols "aah" and "ouch", which make
up its limited language, by associating them with the reinforcement signal.
Thus, it adjusts its behavior to say "ouch" less often, which is just what a
human would do if the negative reinforcement signal were pain. (Also, to
address Jiri Jelinek's question, it makes no conceptual difference if we swap
the symbols so that "aah" represents pain. I did it this way just to make it
more clear what autobliss is doing. The essential property is reinforcement
learning.)

Also, autobliss has volition, meaning it has free will and makes decisions that 
increase its expected reward. Free will is implemented by the rand() function. 
Behaviorally, there is no distinction between free choice and random behavior. 
Belief in free will, which is a separate question, is implemented in humans by 
making random choices and then making up reasons that seem rational for making 
the choice we did. Monkeys do this too.
http://www.world-science.net/othernews/071106_rationalize.htm

Autobliss lacks internal feedback, although I don't see why this matters much. 
Neural networks often use lateral inhibition, activation fatigue, and weight 
decay as negative feedback loops to make them more stable. Autobliss has only 
one neuron (with 4 inputs) so lateral inhibition is not possible. However I 
could add weight decay by adding the following code inside the main loop:

  for (int i = 0; i < 4; ++i)
    mem[i] *= 0.99;   // decay each of the four input weights toward zero

This would keep the input weights from getting too large, but also cause 
autobliss to slowly forget its lessons. It would require occasional 
reinforcement to correct its mistakes. However, this effect could be made 
arbitrarily small by using a decay factor arbitrarily close to 1.
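
For concreteness, here is a rough, self-contained sketch of the sort of thing
autobliss does, with the decay folded in. This is NOT the real autobliss.cpp -
the teacher function and the exact update rule below are simplified stand-ins
- but it shows how the rand()-based choices, the reinforcement signal, and the
decay line fit together:

  #include <cstdio>
  #include <cstdlib>
  #include <ctime>

  // Rough stand-in for autobliss (not the real source). One neuron, four
  // input patterns, one weight each; positive weight means answer aah,
  // negative means ouch.
  int main() {
    std::srand((unsigned)std::time(0));
    double mem[4] = {0, 0, 0, 0};
    for (int step = 0; step < 100000; ++step) {
      int x = std::rand() % 4;                    // random input pattern
      // rand() supplies the "free will": untrained ties become coin flips
      bool ouch = mem[x] < 0 || (mem[x] == 0 && (std::rand() & 1));
      // Illustrative teacher: patterns 2 and 3 deserve ouch
      double r = (ouch == (x >= 2)) ? 1.0 : -1.0;
      mem[x] += ouch ? -r : r;                    // push weight toward reward
      for (int i = 0; i < 4; ++i)
        mem[i] *= 0.99;                           // the decay discussed above
    }
    std::printf("weights: %.1f %.1f %.1f %.1f\n",
                mem[0], mem[1], mem[2], mem[3]);
    return 0;
  }

With the decay in place, the weights settle near a modest equilibrium instead
of growing without bound, which is exactly the stability-versus-forgetting
trade-off described above.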

Anyway, I don't expect this to resolve Mark's disagreement. Intuitively, 
everyone knows that autobliss doesn't really experience pain, so Mark will 
just keep adding conditions until nothing less than a human brain meets his 
requirements, all the time denying that he is making choices about what feels 
pain and what doesn't.

-- Matt Mahoney, [EMAIL PROTECTED]






Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-19 Thread Daniel Yokomizo
On Tue, Nov 18, 2008 at 11:23 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
 Steve, what is the purpose of your political litmus test? If you are trying
 to assemble a team of seed-AI programmers with the correct ethics, forget
 it. Seed AI is a myth.
 http://www.mattmahoney.net/agi2.html (section 2).

(I'm assuming you meant section 5.1, Recursive Self Improvement.)

Why do you call it a myth? Assuming that an AI (not necessarily
general) that is capable of software programming is possible, and that such
an AI is created using software, it's entirely plausible that it would be
able to find places for improvement in its own source code, be it in time
or space usage, missed opportunities for concurrency and parallelism,
improved caching, more efficient data structures, etc. In such a
scenario the AI would be able to create a better version of itself;
how many times this process can be repeated depends heavily on the
cognitive capabilities of the AI and its performance.
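
As a cartoon of that loop (purely illustrative: the contentious step,
proposing a genuinely better rewrite of one's own source, is reduced here to
a random tweak, and a fixed benchmark decides whether the candidate replaces
the current version):

  #include <cstdio>
  #include <cstdlib>
  #include <ctime>

  // The "program" is reduced to a single speed score; benchmark() stands in
  // for running a candidate version on a fixed suite of tasks.
  double benchmark(double speed) { return speed; }

  int main() {
    std::srand((unsigned)std::time(0));
    double self = 1.0;                               // current version's score
    for (int round = 0; round < 30; ++round) {
      double tweak = 0.9 + 0.2 * (std::rand() / (double)RAND_MAX);
      double candidate = self * tweak;               // the "rewritten" version
      if (benchmark(candidate) > benchmark(self)) {  // keep only measured gains
        self = candidate;
        std::printf("round %2d: improved to %.3f\n", round, self);
      }
    }
    return 0;
  }

The loop itself is trivial; everything of interest is hidden in how good the
proposals are, which is exactly the point in dispute.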

If we move to an AGI, it would be able to come up with better tools
(e.g. compilers, type systems, programming languages), improve its
substrate (e.g. write a better OS, rewrite its performance-critical
parts in FPGA), come up with better chips, etc., without even
needing to come up with new theories (i.e. there's sufficient
information already out there that, if synthesized, can lead to better
tools). This would result in another version of the AGI with better
software and hardware, reduced space/time usage, and more concurrency.

We could raise the argument that it'll only be a faster/leaner
AGI and that it will quickly run out of good ideas. But
if it's truly general it would at least be able to come up with all the
science/tech human beings are eventually capable of, and if the AGI can't
progress further it means humans can't progress further either. If
humans are able to progress, then an AGI would be able to progress at
least as quickly as humans, but probably much faster (due to its own
performance enhancements).

I am really interested to see your comments on this line of reasoning.

 -- Matt Mahoney, [EMAIL PROTECTED]

Best regards,
Daniel Yokomizo




Seed AI (was Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...)

2008-11-19 Thread Matt Mahoney
--- On Wed, 11/19/08, Daniel Yokomizo [EMAIL PROTECTED] wrote:

 On Tue, Nov 18, 2008 at 11:23 PM, Matt Mahoney
 [EMAIL PROTECTED] wrote:
  Seed AI is a myth.
  http://www.mattmahoney.net/agi2.html (section 2).
 
 (I'm assuming you meant section 5.1,
 Recursive Self Improvement.)

That too, but mainly in the argument for the singularity:

"If humans can produce smarter-than-human AI, then so can they, and faster."

I am questioning the antecedent, not the consequent.

RSI is not a matter of an agent with an IQ of 180 creating an agent with an IQ
of 190. Individual humans can't produce much of anything beyond spears and
clubs without the global economy in which we live. To count as self
improvement, the global economy has to produce a smarter global economy. This
is already happening.

My paper on RSI referenced in section 5.1 (and submitted to JAGI) only applies
to systems without external input. It would apply to the unlikely scenario of a
program that could understand its own source code and rewrite itself until it
achieved vast intelligence while being kept in isolation for safety reasons.
This scenario often came up on the SL4 list, where it was referred to as AI
boxing. It was argued that a superhuman AI could easily trick its relatively
stupid human guards into releasing it, and there were some experiments where
people played the role of the AI and proved just that, even without vastly
superior intelligence.

I think that the boxed AI approach has been discredited by now as being
impractical to develop, for reasons independent of its inherent danger and my
proof that it is impossible. All of the serious projects in AI are taking place
in open environments, often with data collected from the internet, for simple
reasons of expediency. My argument against seed AI applies in this type of
environment. It is extremely expensive to produce a better global economy. The
current economy is worth about US$ 1 quadrillion. No small group is going to
control any significant part of it.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: Seed AI (was Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...)

2008-11-19 Thread Ben Goertzel
BTW, for those who are newbies to this list, Matt's argument attempting to
refute RSI was extensively discussed on this list a few months ago.

In my view, I refuted his argument pretty clearly, although he does not
agree.

His mathematics is correct, but it seemed to me irrelevant to real-life RSI for
two reasons:

a) it assumes a system isolated from the environment, which won't actually be
the case

b) it uses an intelligence measure focused solely on description length rather
than incorporating runtime

ben g





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"A human being should be able to change a diaper, plan an invasion, butcher
a hog, conn a ship, design a building, write a sonnet, balance accounts,
build a wall, set a bone, comfort the dying, take orders, give orders,
cooperate, act alone, solve equations, analyze a new problem, pitch manure,
program a computer, cook a tasty meal, fight efficiently, die gallantly.
Specialization is for insects."  -- Robert Heinlein





Re: Seed AI (was Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...)

2008-11-19 Thread Daniel Yokomizo
On Wed, Nov 19, 2008 at 1:21 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
 --- On Wed, 11/19/08, Daniel Yokomizo [EMAIL PROTECTED] wrote:

 On Tue, Nov 18, 2008 at 11:23 PM, Matt Mahoney
 [EMAIL PROTECTED] wrote:
  Seed AI is a myth.
  http://www.mattmahoney.net/agi2.html (section 2).

 (I'm assuming you meant section 5.1,
 Recursive Self Improvement.)

 That too, but mainly in the argument for the singularity:

 "If humans can produce smarter-than-human AI, then so can they, and faster."

 I am questioning the antecedent, not the consequent.

 RSI is not a matter of an agent with an IQ of 180 creating an agent with an
 IQ of 190.

I just want to be clear: you agree that an agent is able to create a
better version of itself, not just in terms of a badly defined measure
like IQ but also in terms of resource utilization.


 Individual humans can't produce much of anything beyond spears and clubs
 without the global economy in which we live. To count as self improvement,
 the global economy has to produce a smarter global economy. This is already
 happening.


Do you agree with the statement that the global economy in which we live
is a result of the actions of human beings? How would it be different for
AGIs? Do you disagree that better agents would be able to build an
equivalent global economy much faster than the time it took humans
(counting all the centuries it took since the last big ice age)?


 My paper on RSI referenced in section 5.1 (and submitted to JAGI) only
 applies to systems without external input. It would apply to the unlikely
 scenario of a program that could understand its own source code and rewrite
 itself until it achieved vast intelligence while being kept in isolation for
 safety reasons. This scenario often came up on the SL4 list, where it was
 referred to as AI boxing. It was argued that a superhuman AI could easily
 trick its relatively stupid human guards into releasing it, and there were
 some experiments where people played the role of the AI and proved just
 that, even without vastly superior intelligence.

 I think that the boxed AI approach has been discredited by now as being
 impractical to develop, for reasons independent of its inherent danger and my
 proof that it is impossible. All of the serious projects in AI are taking
 place in open environments, often with data collected from the internet, for
 simple reasons of expediency. My argument against seed AI applies in this
 type of environment.


I'm asking for your comments on the technical issues regarding seed AI
and RSI, regardless of environment. Are there any technical
impossibilities for an AGI to improve its own code in all possible
environments? Also, it's not clear to me in which types of environments
you see problems with RSI (whether it's the boxing that makes it
impossible, an open environment with access to the internet, both, or
neither); could you elaborate further?


 It is extremely expensive to produce a better global economy. The current 
 economy is worth about US$ 1 quadrillion. No small group is going to control 
 any significant part of it.

I want to keep this discussion focused on the technical
impossibilities of RSI, so I'm going to ignore this side
discussion about the global economy for now, but we can go back to it later.

 -- Matt Mahoney, [EMAIL PROTECTED]

Best regards,
Daniel Yokomizo




Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-19 Thread Steve Richfield
Ben:

On 11/18/08, Ben Goertzel [EMAIL PROTECTED] wrote:


 This sounds an awful lot like the Hegelian dialectical method...


Your point being?

We are all stuck in Hegel's Hell whether we like it or not. Reverse Reductio
ad Absurdum is just a tool to help guide us through it.

There seems to be a human tendency to say that something "sounds an awful
lot like (something bad)" in order to dismiss it, but the crucial thing is
often the details rather than the broad strokes. For example, the Communist
Manifesto detailed the coming fall of Capitalism, which we may now be seeing
in the current financial crisis. Sure, the solution proved to be worse than
the problem, but that doesn't mean that the identification of the problems
was in error.

From what I can see, ~100% of the (mis?)perceived threat from AGI comes from
a lack of understanding of RRAA (Reverse Reductio ad Absurdum), both by
those working in AGI and by the rest of the world. This clearly has
the potential of affecting your own future success, so it is probably worth
the extra 10 minutes or so to dig down to the very bottom of it, understand
it, discuss it, and then take your reasoned position regarding it. After
all, your coming super-intelligent AGI will probably have to master RRAA to
be able to resolve intractable disputes, so you will have to be on top of
RRAA if you are to have any chance of debugging your AGI.

Steve Richfield

Re: Seed AI (was Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...)

2008-11-19 Thread Steve Richfield
Back to reality for a moment...

I have greatly increased the IQs of some pretty bright people since I
started doing this in 2001 (the details are way off topic here, so contact
me off-line for more if you are interested), and now others are also doing
this. I think that these people give us a tiny glimpse into which directions
an AGI might go. Here are my impressions:

1. They come up with some really bright stuff, like Mike's FQ theory of how
like-minded groups of people tend to stagnate technology, which few people
can grasp in the minute or so that is available to interest other people.
Hence, their ideas do NOT spread widely, except among others who are bright
enough to get it fairly quickly. From what I have seen, their enhanced IQs
haven't done much for their life success as measured in dollars, but they
have gone in very different directions than they were previously headed, now
that they have some abilities that they didn't previously have.

2.  Enhancing their IQs did NOT seem to alter their underlying belief
system. For example, Dan was and still remains a Baptist minister. However,
he now reads more passages as being metaphorical. We have no problem
carrying on lively political and religious discussions from our VERY
different points of view, with each of us translating our thoughts into the
other's paradigm.

3.  Blind ambition seemed to disappear, being replaced with a long view of
things. They seem to be nicer people for the experience. However, given
their long view, I wouldn't ever recommend becoming an adversary, as they
have no problem with gambits - losing a skirmish to facilitate winning a
greater battle. If you think you are winning, then you had best stop and
look where this might all end up.

4.  They view most people a little like honey bees - useful but stupid. They
often attempt to help others by pointing them in better directions, but
after little/no success for months/years, they eventually give up and just
let everyone destroy their lives and kill themselves. This results in what
might at first appear to be a callous disregard for human life, but which in
reality is just a realistic view of the world. I suspect that future AGIs
would encounter the same effect.

Hence, unless/until someone displays some reason why an AGI might want to
take over the world, I remain unconcerned. What DOES concern me is stupid
people who think that the population can be controlled, without allowing for
the few bright people who can figure out how to be the butterfly that starts
the hurricane, as chaos theory presumes non-computability of things that, if
computable, will be computed. The resulting hurricane might be blamed on the
butterfly, when in reality, there would have been a hurricane anyway - it
just would have been somewhat different. In short, don't blame the AGI for
the fallen bodies of those who would exert unreasonable control.

I see the hope for the future being in the hands of these cognitively
enhanced people. It shouldn't be too much longer until these people start
rising to the top of the AI (and other) ranks. Imagine Loosemore with dozens
more IQ points and the energy to go along with it. Hence, it will be these
people who will make the decisions as to whether we have AGIs and what their
place in the future is.

Then, modern science will be reformed enough to avoid unfortunate kids
having their metabolic control systems trashed by general anesthetics,
etc. (something already being done at many hospitals, including U of W and
Evergreen here in the Seattle area), and we will stop making people who can
be cognitively enhanced. Note that for every such candidate person, there
are dozens of low-IQ gas station attendants, etc., who were subjected to the
same stress but didn't do so well. Then, either we will have our AGIs in
place, or, with no next generation of cognitively enhanced people, we will be
back to the stone age of stupid people. Society has ~50 years to make its
AGI work before this generation of cognitively enhanced people is gone.

Alternatively, some society might intentionally trash kids' metabolisms just
to induce this phenomenon, as a means to secure control when things crash.
At that point, either there is an AGI to take over, or that society will
take over.

In short, this is a complex area that is really worth understanding if you
are interested in where things are going.

Steve Richfield





Re: Seed AI (was Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...)

2008-11-19 Thread Matt Mahoney
--- On Wed, 11/19/08, Daniel Yokomizo [EMAIL PROTECTED] wrote:

 I just want to be clear: you agree that an agent is able to create a
 better version of itself, not just in terms of a badly defined measure
 like IQ but also in terms of resource utilization.

Yes, even bacteria can do this.

 Do you agree with the statement that the global economy in which we live
 is a result of the actions of human beings? How would it be different for
 AGIs? Do you disagree that better agents would be able to build an
 equivalent global economy much faster than the time it took humans
 (counting all the centuries it took since the last big ice age)?

You cannot separate AGI from the human-dominated economy. AGI cannot produce
smarter AGI without help from the 10^10 humans that are already here, until
machines have completely replaced the humans.

 I'm asking for your comments on the technical issues regarding seed AI
 and RSI, regardless of environment. Are there any technical
 impossibilities for an AGI to improve its own code in all possible
 environments? Also, it's not clear to me in which types of environments
 you see problems with RSI (whether it's the boxing that makes it
 impossible, an open environment with access to the internet, both, or
 neither); could you elaborate further?

My paper on RSI refutes one proposed approach to AGI, which would be a
self-improving system developed in isolation. I think that is a good thing,
because such a system would be very dangerous if it were possible. However, I
am not aware of any serious proposals to do it this way, simply because
cutting yourself off from the internet just makes the problem harder.

To me, RSI in an open environment is not pure RSI. It is a combination of self
improvement and learning. My position on this approach is not that it won't
work but that the problem is not as easy as it seems. I believe that if you do
manage to create an AGI that is n times smarter than a human, then the result
would be the same as if you had hired O(n log n) people. (The factor of log n
allows for communication overhead and overlapping knowledge.) We don't really
know what it means to be n times smarter, since we have no way to test it. But
we would expect that such an AGI could work n times faster, learn n times
faster, know n times as much, make n times as much money, and make predictions
as accurately as a vote by n people. I am not sure what other measures we could
apply that would distinguish greater intelligence from just more people.

So to make real progress, you need to make AGI cheaper than human labor for n
of about 10^9. And that is expensive. The global economy has a complexity of
10^17 to 10^18 bits. Most of that knowledge is not written down; it is in
human brains. Unless we develop new technology like brain scanning, the only
way to extract it is by communication at a rate of 2 bits per second per
person.
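
To put rough numbers on that claim - back-of-envelope arithmetic, using only
the figures above (10^17 to 10^18 bits, ~10^10 people, 2 bits per second per
person):

  #include <cstdio>

  // Time to extract the economy's knowledge through the 2 bits/s/person
  // channel, in total person-years and in wall-clock years if every person
  // on Earth talked to the AGI nonstop in parallel.
  int main() {
    const double people = 1e10;
    const double rate = 2.0;                   // bits per second per person
    const double secs_per_year = 3.15e7;
    const double cases[2] = {1e17, 1e18};
    for (int i = 0; i < 2; ++i) {
      double person_years = cases[i] / rate / secs_per_year;
      double wall_years = person_years / people;
      std::printf("%.0e bits: %.1e person-years (%.2f years per person)\n",
                  cases[i], person_years, wall_years);
    }
    return 0;
  }

This works out to roughly 1.6 x 10^9 to 1.6 x 10^10 person-years of nonstop
communication - months to years of full-time effort from every person on
Earth, even in the ideal, fully parallel case.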

 I want to keep this discussion focused on the technical
 impossibilities of RSI, so I'm going to ignore this side
 discussion about the global economy for now, but we can
 go back to it later.

My AGI proposal does not require any technical breakthroughs. But for something
this expensive, you can't ignore the economic model. It has to be
decentralized, there have to be economic incentives for people to transfer
their knowledge to it, and it has to be paid for. That is the obstacle you need
to think about.

-- Matt Mahoney, [EMAIL PROTECTED]





[agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Steve Richfield
To all,

I am considering putting up a web site to filter the crazies as follows,
and would appreciate all comments, suggestions, etc.

Everyone visiting the site would get different questions, in different
orders, etc. Many questions would have more than one correct answer, and in
many cases, some combinations of otherwise reasonable individual answers
would fail. There would be optional tutorials for people who are not
confident with the material. After successfully navigating the site, an
applicant would submit their picture and signature, and we would then
provide a license number. The applicant could then provide their name and
number to 3rd parties to verify that the applicant is at least capable of
rational thought. This information would look much like a driver's license,
and could be printed out as needed by anyone who possessed a correct name
and number.

The site would ask a variety of logical questions, most especially probing
into:
1.  Their understanding of Reverse Reductio ad Absurdum methods of resolving
otherwise intractable disputes.
2.  Whether they belong to or believe in any religion that supports various
violent acts (with quotes from various religious texts). This would exclude
pretty much every religion, as nearly all religions condone useless violence
of various sorts, or the toleration or exposure of violence toward others.
Even Buddhists resist MAD (Mutually Assured Destruction) while being unable
to propose any potentially workable alternative to nuclear war. Jesus
attacked the money changers with no hope of benefit for anyone. Mohammad
killed the Jewish men of Medina and sold their women and children into
slavery, etc., etc.
3.  A statement in their own words that they hereby disavow allegiance
to any non-human god or alien entity, and that they will NOT follow the
directives of any government led by people who would obviously fail this
test. This statement would be included on the license.

This should force many people off of the fence, as they would have to choose
between sanity and Heaven (or Hell).

Then, Ben, the CIA, diplomats, etc., could verify that they are dealing with
people who don't have any of the common forms of societal insanity. Perhaps
the site should be multi-lingual?

Any and all thoughts are GREATLY appreciated.

Thanks

Steve Richfield





Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread martin biehl
Hi Steve

I am not an expert, so correct me if I am wrong. As I see it, everyday
logical arguments (and rationality?) are based on standard classical logic
(or something very similar). Yet I am (sadly) not aware of a convincing
argument that this is the logic to accept as the right choice. You might
know that e.g. intuitionistic logic limits the power of reductio ad absurdum
to negative statements (I don't know what reverse reductio ad absurdum is,
so it may not be a precise counterexample, but I think you get my point).
Would this not make you hesitate? If not, why?

Cheers,

Martin Biehl







Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Bob Mottram
2008/11/18 Steve Richfield [EMAIL PROTECTED]:
 I am considering putting up a web site to filter the crazies as follows,
 and would appreciate all comments, suggestions, etc.


This all sounds peachy in principle, but I expect it would exclude
virtually everyone except perhaps a few of the most diehard
philosophers.  I think most people have at least a few beliefs which
cannot be strictly justified rationally, and that would include many
AI researchers.  Irrational or inconsistent beliefs originate from
being an entity with finite resources - finite experience and finite
processing power and time with which to analyze the data.  Many people
use quick lookups handed to them by individuals considered to be of
higher social status, principally because they don't have time or
inclination to investigate the issues directly themselves.

"In religion and politics people's beliefs and convictions are in
almost every case gotten at second-hand, and without examination, from
authorities who have not themselves examined the questions at issue
but have taken them at second-hand from other non-examiners, whose
opinions about them were not worth a brass farthing." - Mark Twain




Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Trent Waddington
On Tue, Nov 18, 2008 at 8:38 PM, Bob Mottram [EMAIL PROTECTED] wrote:
 I think most people have at least a few beliefs which cannot be strictly 
 justified rationally

You would think that.  :)

Trent




Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Ben Goertzel
 3.  A statement in their own words that they hereby disavow allegiance
 to any non-human god or alien entity, and that they will NOT follow the
 directives of any government led by people who would obviously fail this
 test. This statement would be included on the license.



Hmmm... don't I fail this test every time I follow the speed limit?  ;-)

As another aside, it seems wrong to accuse Buddhists of condoning violence
because they don't like MAD (which involves stockpiling nukes) ... you could
accuse them of foolishness perhaps (though I don't necessarily agree) but
not of condoning violence

My feeling is that with such a group of intelligent and individualistic
folks as transhumanists and AI researchers are, any litmus test for
cognitive sanity you come up with is gonna be quickly revealed to be full
of loopholes that lead to endless philosophical discussions... so that in
the end, such a test could only be used as a general guide, with the
ultimate cognitive-sanity test to be made on a qualitative basis.

In a small project like Novamente, we can evaluate each participant
individually to assess their thought process and background.  In a larger
project like OpenCog, there is not much control over who gets involved, but
making people sign a form promising to be rational and cognitively sane
wouldn't seem to help much, as obviously there is nothing forcing people to
be honest...

ben g





Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Richard Loosemore



I see how this would work:  crazy people never tell lies, so you'd be 
able to nail 'em when they gave the wrong answers.



8-|



Richard Loosemore




Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread BillK
On Tue, Nov 18, 2008 at 1:22 PM, Richard Loosemore wrote:

 I see how this would work:  crazy people never tell lies, so you'd be able
 to nail 'em when they gave the wrong answers.



Yup. That's how they pass lie detector tests as well.

They sincerely believe the garbage they spread around.


BillK




Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Steve Richfield
Martin,

On 11/18/08, martin biehl [EMAIL PROTECTED] wrote:

 I don't know what reverse reductio ad absurdum is, so it may not be a
 precise counterexample, but I think you get my point.


HERE is the crux of my argument, as other forms of logic fall short of being
adequate to run a world with. Reverse Reductio ad Absurdum is the first
logical tool with the promise to resolve most intractable disputes, ranging
from the abortion debate to the Middle East problem.

Some people get it easily, and some require long discussions, so I'll post
the Cliff Notes version here, and if you want it in smaller doses, just
send me an off-line email and we can talk on the phone.

Reductio ad absurdum has worked unerringly for centuries to test bad
assumptions. This constitutes a proof by lack of counterexample that the
ONLY way to reach an absurd result is by a bad assumption, as otherwise,
reductio ad absurdum would sometimes fail.

Hence, when two intelligent people reach conflicting conclusions, but
neither can see any errors in the other's logic, it would seem that they
absolutely MUST have at least one bad assumption. Starting from the
absurdity and searching for the assumption is where the "reverse" in reverse
reductio ad absurdum comes in.

If their false assumptions were different, then one or both parties would
quickly discover them in discussion. However, when the argument stays on the
surface, the ONLY place remaining to hide an invalid assumption is that they
absolutely MUST share the SAME invalid assumptions.
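
In symbols (a formalization that the argument above only gestures at): write
the two parties' premise sets as \Gamma_1 and \Gamma_2. If

  \Gamma_1 \vdash X   and   \Gamma_2 \vdash \neg X

with both derivations valid, then \Gamma_1 \cup \Gamma_2 \vdash \bot, so at
least one premise among them is false. The "reverse" step is the extra,
non-logical claim that when surface debate uncovers no private errors, the
false premise lies in the shared part, \Gamma_1 \cap \Gamma_2.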

Of course if our superintelligent AGI approaches them and points out their
shared invalid assumption, then they would probably BOTH attack the AGI, as
their invalid assumption may be their only point of connection. It appears
that breaking this deadlock absolutely must involve first teaching both
parties what reverse reductio ad absurdum is all about, as I am doing here.

For example, take the abortion debate. It is obviously crazy to be making
and killing babies, and it is a proven social disaster to make this illegal
- an obvious reverse reductio ad absurdum situation.

OK, so let's look at societies where abortion is no issue at all, e.g. Muslim
societies, where it is freely available but no one gets them. There,
children are treated as assets, whereas in all respects we treat them as
liabilities: mothers are stuck with unwanted children; fathers must pay
child support; children can't be bought or sold; there is no expectation
that they will look after their parents in their old age; etc.

In short, BOTH parties believe that children should be treated as
liabilities, but when you point this out, they dispute the claim. Why should
mothers be stuck with unwanted children? Why not allow sales to parties who
really want them? There are no answers to these and other similar questions
because the underlying assumption is clearly wrong.

The middle east situation is more complex but constructed on similar invalid
assumptions.

Are we on the same track now?

Steve Richfield
 


Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Ben Goertzel
This sounds an awful lot like the Hegelian dialectical method...

ben g

On Tue, Nov 18, 2008 at 5:29 PM, Steve Richfield
[EMAIL PROTECTED]wrote:

 Martin,

 On 11/18/08, martin biehl [EMAIL PROTECTED] wrote:

 I don't know what reverse reductio ad absurdum is, so it may not be a
 precise counterexample, but I think you get my point.


 HERE is the crux of my argument, as other forms of logic fall short of
 being adequate to run a world with. Reverse Reductio ad Absurdum is the
 first logical tool with the promise to resolve most intractable disputes,
 ranging from the abortion debate to the middle east problem.

 Some people get it easily, and some require long discussions, so I'll post
 the Cliff Notes version here, and if you want it in smaller doses, just
 send me an off-line email and we can talk on the phone.

 Reductio ad absurdum has worked unerringly for centuries to test bad
 assumptions. This constitutes a proof by lack of counterexample that the
 ONLY way to reach an absurd result is by a bad assumption, as otherwise,
 reductio ad absurdum would sometimes fail.

 Hence, when two intelligent people reach conflicting conclusions, but
 neither can see any errors in the other's logic, it would seem that they
 absolutely MUST have at least one bad assumption. Starting from the
 absurdity and searching for the assumption is where the reverse in reverse
 reductio ad absurdum comes in.

 If their false assumptions were different, than one or both parties would
 quickly discover them in discussion. However, when the argument stays on the
 surface, the ONLY place remaining to hide an invalid assumption is that they
 absolutely MUSH share the SAME invalid assumptions.

 Of course if our superintelligent AGI approaches them and points out their
 shared invalid assumption, then they would probably BOTH attack the AGI, as
 their invalid assumption may be their only point of connection. It appears
 that breaking this deadlock absolutely must involve first teaching both
 parties what reverse reductio ad absurdum is all about, as I am doing here.

 For example, take the abortion debate. It is obviously crazy to be making
 and killing babies, and it is a proven social disaster to make this illegal
 - an obvious reverse reductio ad absurdum situation.

 OK, so lets look at societies where abortion is no issue at all, e.g.
 Muslim societies, where it is freely available, but no one gets them. There,
 children are treated as assets, where in all respects we treat them as
 liabilities. Mothers are stuck with unwanted children. Fathers must pay
 child support, They can't be bought or sold. There is no expectation that
 they will look after their parents in their old age, etc.

 In short, BOTH parties believe that children should be treated as
 liabilities, but when you point this out, they dispute the claim. Why should
 mothers be stuck with unwanted children? Why not allow sales to parties who
 really want them? There are no answers to these and other similar questions
 because the underlying assumption is clearly wrong.

 The Middle East situation is more complex but constructed on similar
 invalid assumptions.

 Are we on the same track now?

 Steve Richfield
  

 2008/11/18 Steve Richfield [EMAIL PROTECTED]

  To all,

 I am considering putting up a web site to filter the crazies as
 follows, and would appreciate all comments, suggestions, etc.

 Everyone visiting the site would get different questions, in different
 orders, etc. Many questions would have more than one correct answer, and in
 many cases, some combinations of otherwise reasonable individual answers
 would fail. There would be optional tutorials for people who are not
 confident with the material. After successfully navigating the site, an
 applicant would submit their picture and signature, and we would then
 provide a license number. The applicant could then provide their name and
 number to 3rd parties to verify that the applicant is at least capable of
 rational thought. This information would look much like a driver's license,
 and could be printed out as needed by anyone who possessed a correct name
 and number.
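
 As a sketch of the mechanics (everything concrete below - the consistency
 rule, the keyed-hash license number, and the site key - is my own
 assumption, since none of those details are settled):

 import hmac, hashlib, random

 SITE_KEY = b"hypothetical-site-secret"  # known only to the site

 def pick_questions(pool, k):
     # Every visitor gets different questions, in a different order.
     return random.sample(pool, k)

 def grade(answers):
     # Answers may be individually reasonable while some combinations fail.
     individually_ok = all(a in ("yes", "no") for a in answers.values())
     consistent = not (answers.get("endorses_rra") == "yes" and
                       answers.get("will_question_own_assumptions") == "no")
     return individually_ok and consistent

 def issue_license(name):
     # A keyed hash of the applicant's name serves as the license number,
     # so a 3rd party holding name + number can ask the site to re-verify.
     return hmac.new(SITE_KEY, name.encode(), hashlib.sha256).hexdigest()[:12]

 def verify_license(name, number):
     return hmac.compare_digest(issue_license(name), number)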

 The site would ask a variety of logical questions, most especially
 probing into:
 1.  Their understanding of Reverse Reductio ad Absurdum methods of
 resolving otherwise intractable disputes.
 2.  Whether they belong to or believe in any religion that supports
 various violent acts (with quotes from various religious texts). This would
 exclude pretty much every religion, as nearly all religions condone useless
 violence of various sorts, or the toleration or exposure of violence toward
 others. Even Buddhists resist MAD (Mutually Assured Destruction) while being
 unable to propose any potentially workable alternative to nuclear war. Jesus
 attacked the money changers with no hope of benefit for anyone. Mohamed
 killed the Jewish men of Medina and sold their women and children into
 slavery, etc., 

Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Steve Richfield
Bob,

On 11/18/08, Bob Mottram [EMAIL PROTECTED] wrote:

 2008/11/18 Steve Richfield [EMAIL PROTECTED]:
  I am considering putting up a web site to filter the crazies as
 follows,
  and would appreciate all comments, suggestions, etc.


 This all sounds peachy in principle, but I expect it would exclude
 virtually everyone except perhaps a few of the most diehard
 philosophers.


My goal is to identify those people who:
1.  Are capable of rational thought, whether or not they choose to use that
ability. I plan to test this with some simple problem solving.
2.  Are not SO connected with some shitforbrains religious group/belief that
they would predictably use dangerous technology to harm others. I plan to
test this by simply demanding a declaration, which would send most such
believers straight to Hell.

Beyond that, I agree that it starts to get pretty hopeless.

I think most people have at least a few beliefs which
 cannot be strictly justified rationally, and that would include many
 AI researchers.


... and probably include both of us as well.

Irrational or inconsistent beliefs originate from
 being an entity with finite resources - finite experience and finite
 processing power and time with which to analyze the data.  Many people
 use quick lookups handed to them by individuals considered to be of
 higher social status, principally because they don't have time or
 inclination to investigate the issues directly themselves.


However, when someone (like me) points out carefully selected passages that
are REALLY crazy, do they re-evaluate, or do they continue to accept
everything they see in the book?

In religion and politics people's beliefs and convictions are in
 almost every case gotten at second-hand, and without examination, from
 authorities who have not themselves examined the questions at issue
 but have taken them at second-hand from other non-examiners, whose
 opinions about them were not worth a brass farthing. - Mark Twain


I completely agree. The question here is whether these people are capable of
questioning and re-evaluation. If so, then they get their license.

Steve Richfield





Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Steve Richfield
Ben,

On 11/18/08, Ben Goertzel [EMAIL PROTECTED] wrote:


  3.  A statement in their own words that they hereby disavow allegiance
 to any non-human god or alien entity, and that they will NOT follow the
 directives of any government led by people who would obviously fail this
 test. This statement would be included on the license.



 Hmmm... don't I fail this test every time I follow the speed limit ?   ;-)


I don't think I stated this well, and perhaps you might be able to say it
better.

If your government wants you to go out and kill people, or help others to go
out and kill people, and you don't see some glimmer of understanding from
the leaders that this is really stupid, then perhaps you shouldn't
contribute to such insanity.

Then, just over this fence to help define the boundary...

Look at the Star Wars anti-missile defense system. It can't possibly ever
work well, as countermeasures are SO simple to implement. However, it was
quite effective in bankrupting the Soviet Union, while people like me were
going around lecturing about what a horrible waste of public resources it was.

In short, I think that re-evaluation is necessary at about the point where
blood starts flowing. What are your thoughts?

 As another aside, it seems wrong to accuse Buddhists of condoning violence
 because they don't like MAD (which involves stockpiling nukes) ... you could
 accuse them of foolishness perhaps (though I don't necessarily agree) but
 not of condoning violence


I have hours of discussion invested in this with Buddhists. I have no
problem at all with them getting themselves killed, but I have a BIG problem
with their asserting their beliefs to get OTHERS killed. If we had a
Buddhist President who kept MAD from being implemented, there is a pretty
good chance that we would not be here to have this discussion.

As an aside, when you look CAREFULLY at the events that were unfolding as
MAD was implemented, there really isn't anything in it against Buddhist
beliefs - just a declaration that if you attack me, I will attack in return,
without restraint even against civilian targets.

 My feeling is that with such a group of intelligent and individualistic
 folks as transhumanists and AI researchers are, any  litmus test for
 cognitive sanity you come up with is gonna be quickly revealed to be full
 of loopholes that lead to endless philosophical discussions... so that in
 the end, such a test could only be used as a general guide, with the
 ultimate cognitive-sanity-test to be made on a qualitative basis


I guess that this is really what I was looking for - just what is that
basis? For example, if someone can lie and answer questions in a logical
manner just to get their license, then they have proven that they can be
logical, whether or not they choose to be. I think that is about as good as
is possible.

 In a small project like Novamente, we can evaluate each participant
 individually to assess their thought process and background.  In a larger
 project like OpenCog, there is not much control over who gets involved, but
 making people sign a form promising to be rational and cognitively sane
 wouldn't seem to help much, as obviously there is nothing forcing people to
 be honest...


... other than their sure knowledge that they will go directly to Hell for
even listening to and considering such things as we are discussing here.

The Fiqh is a body of work outside the Koran that is part of Islam, which
includes stories of Mohamed's life, etc. Therein the boundary is precisely
described.

Islam demands that anyone who converts from Islam be killed.

One poor fellow watched both of his parents refuse to renounce Islam, and
then be killed by invaders. When it came to his turn, he quickly renounced
to save his life. Later, when he was being considered for execution, the
ruling from Mohamed was: If they ask you again, then renounce again. And he
was released.

BTW, it would be really stupid of me to try to enforce a different standard
than you and other potential users of such a site would embrace, so my goal
here is not only to discuss potential construction of such a site, but also
to discuss just what that standard is. Hence, take my words as open for
editing.

Steve Richfield





Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Steve Richfield
Richard and Bill,

On 11/18/08, BillK [EMAIL PROTECTED] wrote:

 On Tue, Nov 18, 2008 at 1:22 PM, Richard Loosemore wrote:
  I see how this would work:  crazy people never tell lies, so you'd be
 able
  to nail 'em when they gave the wrong answers.

Yup. That's how they pass lie detector tests as well.

 They sincerely believe the garbage they spread around.


In 1994 I was literally sold into servitude in Saudi Arabia as a sort of
slave programmer (in COBOL on HP-3000 computers) to the Royal Saudi Air
Force. I managed to escape that situation with the help of the same
Wahhabist Sunni Muslims that are now causing so many problems. With that
background, I think I understand them better than most people.

As in all other societies, they are not given the whole truth, e.g. most
have never heard of the slaughter at Medina, and believe that Mohamed never
hurt anyone at all.

My hope and expectation is that, by allowing people to research various
issues as they work on their test, a LOT of people who might otherwise
fail the test will instead reevaluate their beliefs, at least enough to come
up with the right answers, whether or not they truly believe them. At least
that level of understanding assures that they can carry on a reasoned
conversation. This is a MAJOR problem now. Even here on this forum, many
people still don't get *reverse* reductio ad absurdum.

BTW, I place most of the blame for the Middle East impasse on the West
rather than on the East. The Koran says that most of the evil in the world
is done by people who think they are doing good, which brings with it a good
social mandate to publicly reconsider and defend any actions that others
claim to be evil. The next step is to proclaim evildoers as unwitting
agents of Satan. If there is still no good defense, then they drop the
unwitting. Of course, we stupid uncivilized Westerners have fallen into
this, and so 19 brave men sacrificed their lives just to get our attention,
but even that failed to work as planned. Just what DOES it take to get our
attention - a nuke in NYC? What the West has failed to realize is that it
is playing a losing hand, but nonetheless, it just keeps increasing the
bet on the expectation that the other side will fold. They won't. I was as
much intending my test for the sort of stupidity that nearly all Americans
harbor as that carried by Al Qaeda. Neither side seems to be playing with a
full deck.

Steve Richfield





RE: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Benjamin Johnston
 

Could we please stick to discussion of AGI?

 

-Ben

 



Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Matt Mahoney
Steve, what is the purpose of your political litmus test? If you are trying to 
assemble a team of seed-AI programmers with the correct ethics, forget it. 
Seed AI is a myth.
http://www.mattmahoney.net/agi2.html (section 2).

-- Matt Mahoney, [EMAIL PROTECTED]



Re: **SPAM** Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Mark Waser
 Seed AI is a myth.

Ah.  Now I get it.  You are on this list solely to try to slow down progress as 
much as possible . . . . (sorry that I've been so slow to realize this)

add-rule kill-file Matt Mahoney
  - Original Message - 
  From: Matt Mahoney 
  To: agi@v2.listbox.com 
  Sent: Tuesday, November 18, 2008 8:23 PM
  Subject: **SPAM** Re: [agi] My prospective plan to neutralize AGI and other 
dangerous technologies...


Steve, what is the purpose of your political litmus test? If you are 
trying to assemble a team of seed-AI programmers with the correct ethics, 
forget it. Seed AI is a myth.
http://www.mattmahoney.net/agi2.html (section 2).

-- Matt Mahoney, [EMAIL PROTECTED]



Re: **SPAM** Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Steve Richfield
Matt and Mark,

I think you both missed my point, but in different ways, namely, that there
is a LOT of traffic here on this forum over a problem that appears easy to
resolve once and for all time, and further, that the solution may work for
much more important worldwide social problems.

Continuing with responses to specific points...

On 11/18/08, Mark Waser [EMAIL PROTECTED] wrote:

   Seed AI is a myth.
 Ah.  Now I get it.  You are on this list solely to try to slow down
 progress as much as possible . . . . (sorry that I've been so slow to
 realize this)


No. Like you, we are all trying to put this OT issue out of its misery. I do
appreciate Matt's efforts, misguided though they may be.

Continuing with Matt's comments...

  *From:* Matt Mahoney [EMAIL PROTECTED]
 *To:* agi@v2.listbox.com
 *Sent:* Tuesday, November 18, 2008 8:23 PM
 *Subject:* **SPAM** Re: [agi] My prospective plan to neutralize AGI and
 other dangerous technologies...


   Steve, what is the purpose of your political litmus test?



 I had no intention at all of imposing any sort of political test, beyond
simply looking for some assurance that they weren't about to use the
technology to kill anyone who wasn't in desperate need of being killed.

   If you are trying to assemble a team of seed-AI programmers with the
 correct ethics, forget it. Seed AI is a myth.



 I agree, though my reasoning may be a bit different from yours. Why would
any thinking machine ever want to produce a better thinking machine?
Besides, I can take bright but long-term low-temp people like Loosemore, who
appears to be an absolutely perfect candidate, and make them super-humanly
intelligent by simply removing the impairment that they have learned to live
with. In Loosemore's case, this is probably the equivalent of several
alcoholic drinks, yet he is pretty bright even with that impairment. I would
ask you to imagine what he would be without that impairment, but it may
well be beyond the ability of anyone here to imagine - and well on the way
to a seed, though I suspect that with much more intelligence than he already
has, he would question that goal.

Thanks everyone for your comments.

Steve Richfield