Re: [agi] AGI Alife

2010-07-28 Thread Ian Parker
On 27 July 2010 21:06, Jan Klauck jkla...@uni-osnabrueck.de wrote:


  Second observation about societal punishment eliminating free loaders.
 The
  fact of the matter is that *freeloading* is less of a problem in
  advanced societies than misplaced unselfishness.

 Fact of the matter, hm? Freeloading is an inherent problem in many
 social configurations. 9/11 brought down two towers, freeloading can
 bring down an entire country.

There are very considerable knock-on costs. There is the mushrooming cost
of security, which manifests itself in many ways. There is the cost of
disruption to air travel. If someone rides on a plane without a ticket, no
one's life is put at risk. There are the military costs: it costs $1m per
year to keep a soldier in Afghanistan. I don't know how much a Taliban
fighter costs, but it must be a lot less.

Clearly any reduction in these costs would be welcomed. If someone were to
come along in the guise of social simulation and offer a reduction in these
costs, the research would pay for itself many times over. That is what *you*
are interested in.

This may be a somewhat unpopular thing to say, but money *is* important.
Matt Mahoney has costed his view of AGI. I say that costs must be
recoverable as we go along. Matt, don't frighten people with a high estimate
of cost. Frighten people instead with the bill they are paying now for dumb
systems.


  simulations seem :-
 
  1) To be better done by Calculus.

 You usually use both, equations and heuristics. It depends on the
 problem, your resources, your questions, the people working with it
 a.s.o.


That is the way things should be done. I agree absolutely. We could in fact
take steepest descent (Calculus) and GAs and combine them in a single
composite program. This would in fact be quite a useful exercise. We would
also eliminate genes that simply dealt with Calculus and steepest descent.
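
Purely as an illustration (a toy Python sketch with an invented quadratic
objective and made-up function names, not any existing library), the
composite program could refine each GA offspring with a steepest-descent
step:

import random

def fitness(x):
    # Toy objective: maximise -(x - 3)^2, i.e. a quadratic with optimum at x = 3.
    return -(x - 3.0) ** 2

def steepest_descent_step(x, lr=0.1):
    # The "Calculus" part: one gradient step on the quadratic above.
    return x - lr * 2.0 * (x - 3.0)

def hybrid_ga(pop_size=20, generations=50):
    # The GA part: Gaussian mutation plus selection, with each offspring
    # locally refined by the descent step (a hybrid scheme).
    population = [random.uniform(-10.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        offspring = [x + random.gauss(0.0, 0.5) for x in population]
        offspring = [steepest_descent_step(x) for x in offspring]
        population = sorted(population + offspring, key=fitness, reverse=True)[:pop_size]
    return population[0]

print(hybrid_ga())  # converges to roughly 3.0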

I don't know whether it is useful to think in topological terms.


  - Ian Parker






Re: [agi] AGI Alife

2010-07-28 Thread Ian Parker
One last point. You say freeloading can cause a society to disintegrate. One
society that has come pretty damn close to disintegration is Iraq.
The deaths in Iraq were very much due to sectarian bloodletting.
Unselfishness, if you like.

Would that the Iraqis (and Afghans) were more selfish.


  - Ian Parker





Re: [agi] Clues to the Mind: Learning Ability

2010-07-28 Thread David Jones
:) Intelligence isn't limited to higher cognitive functions. One could say
a virus is intelligent or alive because it can replicate itself.

Intelligence is not just one function or ability; it can be many different
things. But mostly, for us, it comes down to what the system can accomplish
for us.

As for the Turing test, it is basically worthless in my opinion.

PS: you probably should post these video posts to a single thread...

Dave

On Wed, Jul 28, 2010 at 12:39 AM, deepakjnath deepakjn...@gmail.com wrote:

 http://www.facebook.com/video/video.php?v=287151911466

 See how the parrot can learn so much! Does that mean that the parrot has
 intelligence? Will this parrot pass the Turing test?

 There must be a learning center in the brain which is much lower than the
 higher cognitive functions like imagination and thoughts.


 cheers,
 Deepak


Re: [agi] AGI Alife

2010-07-28 Thread Matt Mahoney
Ian Parker wrote:
 Matt Mahoney has costed his view of AGI. I say that costs must be recoverable 
as we go along. Matt, don't frighten people with a high estimate of cost. 
Frighten people instead with the bill they are paying now for dumb systems.

It is not my intent to scare people out of building AGI, but rather to be
realistic about its costs. Building machines that do what we want is a much
harder problem than building intelligent machines. Machines surpassed human
intelligence 50 years ago. But getting them to do useful work is still a $60
trillion per year problem. It's going to happen, but not as quickly as one
might hope.

 -- Matt Mahoney, matmaho...@yahoo.com









Re: [agi] AGI Alife

2010-07-28 Thread Jan Klauck
Ian Parker wrote

 There are the military costs,

Do you realize that you often narrow a discussion down to military
issues of the Iraq/Afghanistan theater?

Freeloading in social simulation isn't about guys using a plane for
free. When you analyse or design a system you look for holes in the
system that allow people to exploit it. In complex systems that happens
often. Most freeloading isn't much of a problem, just friction, but
some have the power to damage the system too much. You have that in
the health system, social welfare, subsidies and funding, the usual
moral hazard issues in administration, services a.s.o.

To come back to AGI: when you hope to design, say, a network of
heterogeneous neurons (taking Linas' example) you should be interested
in excluding mechanisms that allow certain neurons to consume resources
without delivering something in return because of the way resource
allocation is organized. These freeloading neurons could go undetected
for a while, but when you scale the network up or confront it with novel
inputs they could make it run slowly or even break it.
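
As a toy illustration of the kind of resource accounting this implies
(the node data, ratio threshold and function name below are invented for
the example, not taken from any real system):

# Python sketch: flag nodes that consume allocation without contributing
# output over a monitoring window -- a heuristic indicator, not a proof
# of misbehaviour.
nodes = {
    "n1": {"consumed": 120.0, "contributed": 95.0},
    "n2": {"consumed": 80.0,  "contributed": 0.5},   # looks like a freeloader
    "n3": {"consumed": 40.0,  "contributed": 52.0},
}

def freeloaders(nodes, min_ratio=0.1):
    return [name for name, n in nodes.items()
            if n["consumed"] > 0 and n["contributed"] / n["consumed"] < min_ratio]

print(freeloaders(nodes))  # ['n2']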

 If someone were to come
 along in the guise of social simulation and offer a reduction in
 these costs the research would pay for itself many times over.

SocSim research into peace and conflict studies isn't new. And
some people in the community work on the Iraq/Afghanistan issue (for
the US).

 That is the way things should be done. I agree absolutely. We could in
 fact
 take steepest descent (Calculus) and GAs and combine them together in a
 single composite program. This would in fact be quite a useful exercise.

Just a note: Social simulation is not so much about GAs. You use
agent systems and equation systems. Often you mix both in that you
define the agent's behavior and the environment via equations, let
the sim run and then describe the results in statistical terms or
with curve fitting in equations again.
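
A minimal sketch of that mix, in Python (the regrowth equation, the agents'
harvesting rule and all parameters are invented purely for illustration):

import random, statistics

def run_sim(steps=100, n_agents=50, tax=0.1):
    # Equation part: a simple resource-regrowth equation drives the environment.
    # Agent part: each agent harvests by a simple behavioural rule.
    resource = 100.0
    wealth = [1.0] * n_agents
    for _ in range(steps):
        resource += 5.0 - 0.02 * resource
        for i in range(n_agents):
            harvest = min(resource, random.uniform(0.0, 0.2))
            resource -= harvest
            wealth[i] += harvest * (1.0 - tax)
    # Statistical summary of the run, as described above.
    return statistics.mean(wealth), statistics.pstdev(wealth)

print(run_sim())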

 One last point. You say freeloading can cause a society to disintegrate.
 One society that has come pretty damn close to disintegration is Iraq.
 The deaths in Iraq were very much due to sectarian bloodletting.
 Unselfishness if you like.

Unselfishness gone wrong is a symptom, not a cause. The causes for
failed states are different.





Re: [agi] Tweaking a few parameters

2010-07-28 Thread Jan Klauck
A. T. Murray wrote

 Robot:  I AM ANDRU
 Robot:  I AM ANDRU
 Robot:  ANDRU HELPS KIDS
 Robot:  KIDS MAKE ROBOTS
 Robot:  ROBOTS NEED ME
 Robot:  I IS I
 Robot:  I AM ANDRU
 Robot:  ANDRU HELPS KIDS
 Robot:  KIDS MAKE ROBOTS

 For the first time in our dozen-plus years of
 developing MindForth, the AI acts like an
 intelligence struggling to express itself,

An artificial retard?

 We seem to be dealing
 with a true artificial intelligence here.

Definitely.

 Now we
 upload the AI Mind to the World Wide Awakening Web.

Next stop Singularity Station.

:)




Re: [agi] AGI Alife

2010-07-28 Thread Ian Parker
Unselfishness gone wrong is a symptom. I think that this and all the other
examples should be cautionary for anyone who follows the biological model.
Do we want a system that thinks the way we do? Hell no! What we would want
in a *friendly* system would be a set of utilitarian axioms. That would
immediately make it think differently from us.

We certainly would not want a system which would arrest men for kissing on a
park bench. In other words, we would not want a system which was
axiomatically righteous. It is also important that AGI is fully axiomatic
and proves that 1+1=2 by set theory, as Russell did. This immediately takes
it out of the biological sphere.

We will need morality to be axiomatically defined.

Unselfishness going wrong is in fact a frightening thought. It would in AGI
be a symptom of incompatible axioms. In humans it is a real problem and it
should tell us that AGI cannot and should not be biologically based.

On 28 July 2010 15:59, Jan Klauck jkla...@uni-osnabrueck.de wrote:

 Ian Parker wrote

  There are the military costs,

 Do you realize that you often narrow a discussion down to military
 issues of the Iraq/Afghanistan theater?

 Freeloading in social simulation isn't about guys using a plane for
 free. When you analyse or design a system you look for holes in the
 system that allow people to exploit it. In complex systems that happens
 often. Most freeloading isn't much of a problem, just friction, but
 some have the power to damage the system too much. You have that in
 the health system, social welfare, subsidies and funding, the usual
 moral hazard issues in administration, services a.s.o.


  To come back to AGI: when you hope to design, say, a network of
  heterogeneous neurons (taking Linas' example) you should be interested
  in excluding mechanisms that allow certain neurons to consume resources
  without delivering something in return because of the way resource
  allocation is organized. These freeloading neurons could go undetected
  for a while, but when you scale the network up or confront it with novel
  inputs they could make it run slowly or even break it.


In point of fact we can look at this another way. Let's dig a little bit
deeper: http://sites.google.com/site/aitranslationproject/computergobbledegook
If we have one AGI system we can have 2 (or 3 even; automatic landing in fog
is a triplex system). Suppose system A is monitoring system B. If system B's
resources are being used up, A can shut down processes in B. I talked about
computer gobbledegook. I also have the feeling that with AGI we should be
able to get intelligible advice (in NL) about what was going wrong. For this
reason it would not be possible to overload AGI.
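
A toy Python sketch of that monitoring arrangement (the Worker class, load
figures and limit are invented for illustration; a real system would watch
real process metrics):

import threading, time

class Worker:
    # Stand-in for "system B": does work and its resource load keeps growing.
    def __init__(self):
        self.load = 0.0
        self.running = True
    def run(self):
        while self.running:
            self.load += 5.0
            time.sleep(0.01)

def monitor(worker, limit=100.0):
    # Stand-in for "system A": watches B, shuts the runaway work down and
    # reports an intelligible reason in plain language.
    while worker.running:
        if worker.load > limit:
            worker.running = False
            print("Shutting down worker: load %.0f exceeded limit %.0f" % (worker.load, limit))
        time.sleep(0.01)

w = Worker()
threading.Thread(target=w.run).start()
monitor(w)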

I have the feeling that perhaps one aim in AGI should be user-friendly
systems. One product is in fact a form filler.

As far as society is concerned, I think this all depends on how resource
limited we are. In a resource limited society freeloading is the biggest
issue. In our society violence in all its forms is the big issue. One need
not go to Iraq or Afghanistan for examples. There are plenty in ordinary
crime: happy slapping, domestic violence, violence against children.

If the people who wrote computer viruses stole a large sum of money, what
they did would, to me at any rate, be more forgivable. People take a
delight in wrecking things for other people while not stealing very much
themselves. Iraq, Afghanistan and suicide murder are really simply an extreme
example of this. Why I come back to it is that the people feel they are
doing Allah's will. Happy slappers usually say they have nothing better to
do.

The fundamental fact about Western crime is that very little of it is to do
with personal gain or greed.


  If someone were to come
  along in the guise of social simulation and offer a reduction in
  these costs the research would pay for itself many times over.

 SocSim research into peace and conflict studies isn't new. And
 some people in the community work on the Iraq/Afghanistan issue (for
 the US).

  That is the way things should be done. I agree absolutely. We could in
  fact
  take steepest descent (Calculus) and GAs and combine them together in a
  single composite program. This would in fact be quite a useful exercise.

 Just a note: Social simulation is not so much about GAs. You use
 agent systems and equation systems. Often you mix both in that you
 define the agent's behavior and the environment via equations, let
 the sim run and then describe the results in statistical terms or
 with curve fitting in equations again.

  One last point. You say freeloading can cause a society to disintegrate.
  One society that has come pretty damn close to disintegration is Iraq.
  The deaths in Iraq were very much due to sectarian bloodletting.
  Unselfishness if you like.

 Unselfishness gone wrong is a symptom, not a cause. The causes for
 failed states are different.


Axiomatic contradiction. Cannot occur in a mathematical system.


  - Ian Parker

Re: [agi] AGI Alife

2010-07-28 Thread Jan Klauck
Ian Parker wrote

 What we would want
 in a *friendly* system would be a set of utilitarian axioms.

If we program a machine for winning a war, we must think well what
we mean by winning.

(Norbert Wiener, Cybernetics, 1948)

 It is also important that AGI is fully axiomatic
 and proves that 1+1=2 by set theory, as Russell did.

Quoting the two important statements from

http://en.wikipedia.org/wiki/Principia_Mathematica#Consistency_and_criticisms

Gödel's first incompleteness theorem showed that Principia could not
be both consistent and complete.

and

Gödel's second incompleteness theorem shows that no formal system
extending basic arithmetic can be used to prove its own consistency.

So in effect your AGI is either crippled but safe, or powerful but
potentially behaving differently from your axiomatic intentions.

 We will need morality to be axiomatically defined.

As constraints, possibly. But we can only check the AGI at runtime for
certain behaviors (i.e., while it's active); we can't prove in
advance whether it will break the constraints or not.

Get me right: we can do a lot with such formal specifications, and we
should use them where necessary or appropriate, but we have to understand
that our set of guaranteed behaviors is a proper subset of the set of
all possible behaviors the AGI can execute. It's heuristics in the end.
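
A minimal Python sketch of what runtime checking, as opposed to proof,
looks like (the constraint predicates and action fields are invented for
the example; the filter only covers behaviours the constraints happen to
describe):

# Every proposed action is screened against explicit constraints while the
# system runs -- a heuristic filter, not a proof of safety.
constraints = [
    lambda action: action.get("harm", 0) == 0,
    lambda action: action.get("lawful", True),
]

def permitted(action):
    return all(check(action) for check in constraints)

print(permitted({"name": "answer_query", "harm": 0}))                   # True
print(permitted({"name": "coerce_user", "harm": 3, "lawful": False}))   # False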

 Unselfishness going wrong is in fact a frightening thought. It would in
 AGI be a symptom of incompatible axioms.

Which can happen in a complex system.

 Suppose system A is monitoring system B. If system B's
 resources are being used up, A can shut down processes in B. I talked
 about computer gobbledegook. I also have the feeling that with AGI we
 should be able to get intelligible advice (in NL) about what was going
 wrong. For this reason it would not be possible to overload AGI.

This isn't going to guarantee that system A, B, etc. behave in all
ways as intended, unless they are all special-purpose systems (here:
narrow AI). If A, B, etc. are AGIs, then this checking is just a
heuristic, no guarantee or proof.

 In a resource limited society freeloading is the biggest issue.

All societies are and will be constrained by limited resources.

 The fundamental fact about Western crime is that very little of it is
 to do with personal gain or greed.

Not that sure whether this statement is correct. It feels wrong from
what I know about human behavior.

 Unselfishness gone wrong is a symptom, not a cause. The causes for
 failed states are different.

 Axiomatic contradiction. Cannot occur in a mathematical system.

See above...





Re: [agi] AGI Alife

2010-07-28 Thread Ian Parker
On 28 July 2010 19:56, Jan Klauck jkla...@uni-osnabrueck.de wrote:

 Ian Parker wrote

  What we would want
  in a *friendly* system would be a set of utilitarian axioms.

 If we program a machine for winning a war, we must think well what
 we mean by winning.


I wasn't thinking about winning a war; I was thinking much more about sexual
morality and men kissing.

Winning a war is achieving your political objectives in the war. Simple
definition.


 (Norbert Wiener, Cybernetics, 1948)

  It is also important that AGI is fully axiomatic
  and proves that 1+1=2 by set theory, as Russell did.

 Quoting the two important statements from


 http://en.wikipedia.org/wiki/Principia_Mathematica#Consistency_and_criticisms

 Gödel's first incompleteness theorem showed that Principia could not
 be both consistent and complete.

 and

 Gödel's second incompleteness theorem shows that no formal system
 extending basic arithmetic can be used to prove its own consistency.

  So in effect your AGI is either crippled but safe, or powerful but
  potentially behaving differently from your axiomatic intentions.


You have to state what your axioms are. Gödel's theorem does indeed imply
that. You do therefore have to make some statements which are unprovable.
What I was in fact thinking of was something like Mizar.
Mathematics starts off with simple ideas. The axioms which we cannot prove
should be listed. You can't prove them. Let's list them and all the
assumptions.

If we have a Mizar proof, we assume things and argue the case for a theorem
from what we have assumed. What you should be able to do is get from the
ideas of Russell and Bourbaki to something really meaty like Fermat's Last
Theorem, or the Riemann hypothesis.

The organization of Mizar (and Alcor, which is a front end to it) is very
much a part of AGI. Alcor has in fact to do a similar job to Google in terms
of searching for theorems. Mizar, though, is different from Google in that we
have lemmas. You prove something by linking the lemmas up.

Suppose I were to search for Riemann Hypothesis. Alcor should give me all
the theorems that depend on it. It should tell me about the density of
primes. It should tell me about the Goldbach conjecture, proved by Hardy and
Littlewood to depend on Riemann.

Google is a step towards AGI. An Alcor which could produce chains
of argument and find lemmas would be a big step to AGI.
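
A toy Python sketch of that kind of dependency search (the theorem names and
the dependency graph are invented placeholders, not real Mizar or Alcor data):

from collections import deque

# Toy graph: theorem -> the lemmas/conjectures it depends on.
depends_on = {
    "prime_density_bound": ["riemann_hypothesis"],
    "goldbach_weak_form":  ["riemann_hypothesis", "circle_method"],
    "some_corollary":      ["prime_density_bound"],
}

def dependents(target):
    # Breadth-first search for everything that transitively rests on target.
    found, queue = set(), deque([target])
    while queue:
        current = queue.popleft()
        for theorem, lemmas in depends_on.items():
            if current in lemmas and theorem not in found:
                found.add(theorem)
                queue.append(theorem)
    return found

print(dependents("riemann_hypothesis"))  # the three toy theorems above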

Could Mizar contain knowledge which was non-mathematical? In a sense it
already can. Mizar will contain Riemannian differential geometry. This is
simply a piece of pure maths. I am allowed to make a conjecture, an axiom if
you like, that Riemannian differential geometry, in the shape of General
Relativity, is the way in which the Universe works. I have stated this as
an unproven assertion, one that has been constantly verified experimentally
but is unproven in the mathematical universe.


  We will need morality to be axiomatically defined.

 As constraints, possibly. But we can only check the AGI at runtime for
 certain behaviors (i.e., while it's active); we can't prove in
 advance whether it will break the constraints or not.

 Get me right: we can do a lot with such formal specifications, and we
 should use them where necessary or appropriate, but we have to understand
 that our set of guaranteed behaviors is a proper subset of the set of
 all possible behaviors the AGI can execute. It's heuristics in the end.


The heuristics could be tested in an off line system.


  Unselfishness going wrong is in fact a frightening thought. It would in
  AGI be a symptom of incompatible axioms.

 Which can happen in a complex system.


Only if the definitions are vague. The definition of happiness is vague.
Better to have a system based on *democracy* in some form or other. The
beauty of Matt's system is that we would remain ultimately in charge of the
system. We make rules such as no imprisonment without trial, a minimum of
laws restricting personal freedom (men kissing), separation of powers between
the judiciary and the executive, and the resolution of disputes without
violence. These are, I repeat, *not* fundamental philosophical principles but
rules which our civilization has devised and which have been found to work.

I have mentioned before that we could have more than one AGI system. All the
*derived* principles would be tested off line on another AGI system.


  Suppose system A is monitoring system B. If system B's
  resources are being used up, A can shut down processes in B. I talked
  about computer gobbledegook. I also have the feeling that with AGI we
  should be able to get intelligible advice (in NL) about what was going
  wrong. For this reason it would not be possible to overload AGI.

 This isn't going to guarantee that system A, B, etc. behave in all
 ways as intended, unless they are all special-purpose systems (here:
 narrow AI). If A, B, etc. are AGIs, then this checking is just a
 heuristic, no guarantee or proof.

 In a resource limited society freeloading is the biggest issue.

Re: [agi] AGI Alife

2010-07-28 Thread Jan Klauck
Ian Parker wrote

 If we program a machine for winning a war, we must think well what
 we mean by winning.

 I wasn't thinking about winning a war, I was much more thinking about
 sexual morality and men kissing.

If we program a machine for doing X, we must think well what we mean
by X.

Now clearer?

 Winning a war is achieving your political objectives in the war. Simple
 definition.

Then define your political objectives. No holes, no ambiguity, no
forgotten cases. Or does the AGI ask for our feedback during the mission?
If yes, down to what detail?

 The axioms which we cannot prove
 should be listed. You can't prove them. Let's list them and all the
 assumptions.

And then what? Cripple the AGI by applying just those theorems we can
prove? That excludes of course all those we're uncertain about. And
it's not so much a single theorem that's problematic but a system of
axioms and inference rules that changes its properties when you
modify it or that is incomplete from the beginning.

Example (very plain just to make it clearer what I'm talking about):

The natural numbers N are closed against addition. But N is not
closed against subtraction, since n - m < 0 where m > n.

You can prove the theorem that subtracting a positive number from
another number decreases it:

http://us2.metamath.org:88/mpegif/ltsubpos.html

but you can still have a formal system that runs into problems.
In the case of N it's missing closedness, i.e., undefined area.
Now transfer this simple example to formal systems in general.
You have to prove every formal system as it is, not just a single
theorem. The behavior of an AGI isn't a single theorem but a system.
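
To make the arithmetic example concrete, a minimal Lean 4 sketch
(illustrative only, assuming plain Lean 4 with no extra libraries): the
"subtracting a positive number decreases the result" fact is provable over
the integers, yet Lean's natural-number subtraction truncates at zero, so a
system built only on N quietly behaves differently from the intuitive axioms.

-- Over the integers the cited theorem holds: subtracting a positive
-- number gives something strictly smaller.
example : (5 : Int) - 3 < 5 := by decide

-- But N is not closed under subtraction; Nat subtraction truncates,
-- so the "same" operation silently changes its meaning. The problem
-- lies in the system, not in any single theorem.
example : (2 : Nat) - 3 = 0 := rfl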

 The heuristics could be tested in an off line system.

Exactly. But by definition heuristics are incomplete, their solution
space is smaller than the set of all solutions. No guarantee for the
optimal solution, just probabilities < 1, elaborated hints.

 Unselfishness going wrong is in fact a frightening thought. It would
 in
 AGI be a symptom of incompatible axioms.

 Which can happen in a complex system.

 Only if the definitions are vague.

I bet against this.

 Better to have a system based on *democracy* in some form or other.

The rules you mention are goals and constraints. But they are heuristics
you check during runtime.


