Unselfishness gone wrong is a symptom. I think that this and all the other
examples should be cautionary for anyone who follows the biological model.
Do we want a system that thinks the way we do? Hell no! What we would want
in a "friendly" system is a set of utilitarian axioms. That would
immediately make it think differently from us.

We certainly would not want a system that would arrest men for kissing on a
park bench; in other words, we would not want a system that was
axiomatically righteous. It is also important that an AGI be fully
axiomatic, proving that 1+1=2 from set theory as Russell did. That
immediately takes it out of the biological sphere.
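
As a toy illustration of what "fully axiomatic" means in practice, here is
Russell's fact checked mechanically. This is a minimal sketch in Lean 4 (my
choice of proof assistant, purely for illustration), where the Peano-style
construction of the naturals makes the proof definitional:

    -- 1 + 1 = 2, machine-checked from the Peano-style construction of Nat.
    -- Russell and Whitehead derived the analogous fact in Principia Mathematica.
    theorem one_plus_one_eq_two : 1 + 1 = 2 := rfl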

We will need morality to be axiomatically defined.

Unselfishness going wrong is in fact a frightening thought. In an AGI it
would be a symptom of incompatible axioms. In humans it is a real problem,
and it should tell us that AGI cannot and should not be biologically based.

On 28 July 2010 15:59, Jan Klauck <jkla...@uni-osnabrueck.de> wrote:

> Ian Parker wrote
>
> > There are the military costs,
>
> Do you realize that you often narrow a discussion down to military
> issues of the Iraq/Afghanistan theater?
>
> Freeloading in social simulation isn't about guys using a plane for
> free. When you analyse or design a system you look for holes in the
> system that allow people to exploit it. In complex systems that happens
> often. Most freeloading isn't much of a problem, just friction, but
> some have the power to damage the system too much. You have that in
> the health system, social welfare, subsidies and funding, the usual
> moral hazard issues in administration, services and so on.


> To come back to AGI: when you hope to design, say, a network of
> heterogeneous neurons (taking Linas' example) you should be interested
> in excluding mechanisms that allow certain neurons to consume resources
> without delivering something in return because of the way resource
> allocation is organized. These freeloading neurons could go undetected
> for a while but when you scale the network up or confront it with novel
> inputs they could make it run slow or even break it.
>
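
Jan's point can be made concrete. Below is a minimal sketch, in Python, of
the kind of resource-versus-contribution accounting that would flag
freeloading units; every name here (Unit, contribution, the threshold) is
an assumption of mine for illustration, not part of any existing design:

    from dataclasses import dataclass

    @dataclass
    class Unit:
        name: str
        resources_used: float   # e.g. CPU-seconds consumed this epoch
        contribution: float     # e.g. marginal effect on network output

    def freeloaders(units, min_ratio=0.1):
        """Flag units whose contribution per unit of resource falls
        below min_ratio; these are candidates for throttling."""
        flagged = []
        for u in units:
            ratio = u.contribution / u.resources_used if u.resources_used else 1.0
            if ratio < min_ratio:
                flagged.append(u)
        return flagged

    units = [
        Unit("n1", resources_used=5.0, contribution=4.0),
        Unit("n2", resources_used=8.0, contribution=0.2),  # consumes much, returns little
        Unit("n3", resources_used=0.5, contribution=0.6),
    ]
    for u in freeloaders(units):
        print(f"{u.name} looks like a freeloader: {u.contribution}/{u.resources_used}")

The hard part, of course, is measuring contribution at all; the bookkeeping
itself is trivial.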

In point of fact we can look at this another way. Let's dig a little bit
deeper <http://sites.google.com/site/aitranslationproject/computergobbledegook>.
If we have one AGI system we can have two (or even three; automatic landing in
fog is a triplex system). Suppose system A is monitoring system B. If system
B's resources are being used up, A can shut down processes in B. I talked about
computer gobbledegook. I also have the feeling that with AGI we should be
able to get intelligible advice (in natural language) about what was going
wrong. For this reason it should not be possible to overload an AGI.
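
A minimal sketch of what A-monitoring-B could look like in practice; the
process list, the memory budget and the use of the psutil library are my
assumptions for illustration, not a worked-out design:

    import psutil  # third-party process-inspection library

    MEMORY_LIMIT_MB = 512  # assumed per-process budget for system B

    def monitor_system_b(b_pids):
        """One watchdog pass by system A over system B's processes."""
        for pid in b_pids:
            try:
                proc = psutil.Process(pid)
                mem_mb = proc.memory_info().rss / (1024 * 1024)
                if mem_mb > MEMORY_LIMIT_MB:
                    # Report intelligibly, then shut the runaway process down.
                    print(f"B process {pid} is using {mem_mb:.0f} MB "
                          f"(budget {MEMORY_LIMIT_MB} MB); terminating it.")
                    proc.terminate()
            except psutil.NoSuchProcess:
                pass  # the process exited on its own

In a real duplex or triplex arrangement B would of course watch A as well.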

I have the feeling that one aim in AGI should be user-friendly systems. One
product would in fact be a form filler.

As far as society is concerned, I think this all depends on how resource-limited
we are. In a resource-limited society, freeloading is the biggest
issue. In our society, violence in all its forms is the big issue. One need
not go to Iraq or Afghanistan for examples; there are plenty in ordinary
crime: "happy" slapping, domestic violence, violence against children.

If the people who wrote computer viruses stole large sums of money, what
they did would, to me at any rate, be more forgivable. Instead, people take
delight in wrecking things for other people while not stealing very much
themselves. Iraq, Afghanistan and suicide murder are really just extreme
examples of this. The reason I come back to them is that the people involved
feel they are doing Allah's will. Happy slappers usually say they have
nothing better to do.

The fundamental fact about Western crime is that very little of it is to do
with personal gain or greed.

>
> > If someone were to come
> > along in the guise of social simulation and offer a reduction in
> > these costs the research would pay for itself many times over.
>
> SocSim research into "peace and conflict studies" isn't new. And
> some people in the community work on the Iraq/Afghanistan issue (for
> the US).
>
> > That is the way things should be done. I agree absolutely. We could in
> > fact take steepest descent (calculus) and GAs and combine them in a
> > single composite program. This would in fact be quite a useful exercise.
>
> Just a note: Social simulation is not so much about GAs. You use
> agent systems and equation systems. Often you mix both in that you
> define the agent's behavior and the environment via equations, let
> the sim run and then describe the results in statistical terms or
> with curve fitting in equations again.
>
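
For what it is worth, here is a toy version of the composite program I had
in mind: a GA doing the global search, with a few steepest-descent steps
refining each child. The test function, rates and population size are all
illustrative assumptions:

    import random

    def f(x):                # function to minimize
        return (x - 3.0) ** 2 + 2.0

    def grad_f(x):           # its derivative, used for steepest descent
        return 2.0 * (x - 3.0)

    def descend(x, steps=5, lr=0.1):
        # Steepest descent: move against the gradient a few times.
        for _ in range(steps):
            x -= lr * grad_f(x)
        return x

    def evolve(pop_size=20, generations=30, mut_sigma=0.5):
        pop = [random.uniform(-10.0, 10.0) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=f)                             # fittest (lowest f) first
            parents = pop[: pop_size // 2]              # selection
            children = [p + random.gauss(0.0, mut_sigma) for p in parents]  # mutation
            children = [descend(c) for c in children]   # local gradient refinement
            pop = parents + children
        return min(pop, key=f)

    print(evolve())  # should print a value near 3.0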
> > One last point. You say freeloading can cause a society to disintegrate.
> > One society that has come pretty damn close to disintegration is Iraq.
> > The deaths in Iraq were very much due to sectarian bloodletting.
> > Unselfishness, if you like.
>
> Unselfishness gone wrong is a symptom, not a cause. The causes for
> failed states are different.
>

An axiomatic contradiction, in other words, and that cannot occur in a
consistent mathematical system.


  - Ian Parker



