Re: [agi] AGI Alife

2010-07-28 Thread Ian Parker
On 27 July 2010 21:06, Jan Klauck jkla...@uni-osnabrueck.de wrote: Second observation about societal punishment eliminating freeloaders. The fact of the matter is that *freeloading* is less of a problem in advanced societies than misplaced unselfishness. Fact of the matter, hm?

Re: [agi] AGI Alife

2010-07-28 Thread Ian Parker
One last point. You say freeloading can cause a society to disintegrate. One society that has come pretty damn close to disintegration is Iraq. The deaths in Iraq were very much due to sectarian bloodletting. Unselfishness, if you like. Would that the Iraqis (and Afghans) were more selfish.

Re: [agi] Clues to the Mind: Learning Ability

2010-07-28 Thread David Jones
:) Intelligence isn't limited to higher cognitive functions. One could say a virus is intelligent or alive because it can replicate itself. Intelligence is not just one function or ability, it can be many different things. But mostly, for us, it comes down to what the system can accomplish for

Re: [agi] AGI Alife

2010-07-28 Thread Matt Mahoney
Ian Parker wrote: Matt Mahoney has costed his view of AGI. I say that costs must be recoverable as we go along. Matt, don't frighten people with a high estimate of cost. Frighten people instead with the bill they are paying now for dumb systems. It is not my intent to scare people out of

Re: [agi] AGI Alife

2010-07-28 Thread Jan Klauck
Ian Parker wrote: There are the military costs. Do you realize that you often narrow a discussion down to military issues of the Iraq/Afghanistan theater? Freeloading in social simulation isn't about guys using a plane for free. When you analyse or design a system you look for holes in the
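
Aside (not from the thread): the freeloading under discussion is the standard social-simulation setup, a public goods game in which some agents contribute to a common pot and others do not, with an optional punishment stage for non-contributors. The Python sketch below is only illustrative; the agent count, multiplier, and fine/fee values are my assumptions, not anyone's actual model.

    import random

    # Minimal public goods game with optional punishment of freeloaders.
    # All parameter values (endowment, multiplier, fine/fee) are illustrative.

    N_AGENTS = 20
    ENDOWMENT = 10
    MULTIPLIER = 1.6      # pooled contributions are multiplied and shared equally
    PUNISH_FEE = 1        # cost to a punisher per freeloader punished
    PUNISH_FINE = 3       # fine paid by a punished freeloader per punisher

    def play_round(cooperators, punishers):
        """One round: cooperators and punishers contribute, freeloaders keep their endowment."""
        contributors = cooperators | punishers
        pot = len(contributors) * ENDOWMENT * MULTIPLIER
        share = pot / N_AGENTS
        payoffs = {}
        for agent in range(N_AGENTS):
            contributed = ENDOWMENT if agent in contributors else 0
            payoffs[agent] = ENDOWMENT - contributed + share
        # Punishment phase: every punisher fines every freeloader.
        freeloaders = [a for a in range(N_AGENTS) if a not in contributors]
        for agent in punishers:
            payoffs[agent] -= PUNISH_FEE * len(freeloaders)
        for agent in freeloaders:
            payoffs[agent] -= PUNISH_FINE * len(punishers)
        return payoffs

    if __name__ == "__main__":
        cooperators = set(random.sample(range(N_AGENTS), 8))
        punishers = set(random.sample(sorted(set(range(N_AGENTS)) - cooperators), 4))
        payoffs = play_round(cooperators, punishers)
        freeloaders = set(range(N_AGENTS)) - cooperators - punishers
        print("mean cooperator payoff:", sum(payoffs[a] for a in cooperators) / len(cooperators))
        print("mean freeloader payoff:", sum(payoffs[a] for a in freeloaders) / len(freeloaders))

With these illustrative numbers, freeloaders out-earn contributors when nobody punishes, and earn less than contributors once punishers are present, while the punishers themselves pay for enforcing the norm (the second-order free-rider problem).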

Re: [agi] Tweaking a few parameters

2010-07-28 Thread Jan Klauck
A. T. Murray wrote: Robot: I AM ANDRU Robot: I AM ANDRU Robot: ANDRU HELPS KIDS Robot: KIDS MAKE ROBOTS Robot: ROBOTS NEED ME Robot: I IS I Robot: I AM ANDRU Robot: ANDRU HELPS KIDS Robot: KIDS MAKE ROBOTS For the first time in our dozen-plus years of developing MindForth, the

Re: [agi] AGI Alife

2010-07-28 Thread Ian Parker
Unselfishness gone wrong is a symptom. I think that this and all the other examples should be cautionary for anyone who follows the biological model. Do we want a system that thinks the way we do? Hell no! What we would want in a *friendly* system would be a set of utilitarian axioms. That would

Re: [agi] AGI Alife

2010-07-28 Thread Jan Klauck
Ian Parker wrote: What we would want in a *friendly* system would be a set of utilitarian axioms. "If we program a machine for winning a war, we must think well what we mean by winning." (Norbert Wiener, Cybernetics, 1948) It is also important that AGI is fully axiomatic and proves that 1+1=2
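
Aside (not from the thread): for a sense of what a "fully axiomatic" 1+1=2 looks like in practice, here is a one-line example in a modern proof assistant. The choice of Lean 4 is mine, not the poster's; the statement holds by definitional reduction of addition on the natural numbers.

    -- 1 + 1 reduces to 2 by the definition of Nat addition, so rfl closes the goal.
    example : (1 : Nat) + 1 = 2 := rfl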

Re: [agi] AGI Alife

2010-07-28 Thread Ian Parker
On 28 July 2010 19:56, Jan Klauck jkla...@uni-osnabrueck.de wrote: Ian Parker wrote: What we would want in a *friendly* system would be a set of utilitarian axioms. If we program a machine for winning a war, we must think well what we mean by winning. I wasn't thinking about winning a

Re: [agi] AGI Alife

2010-07-28 Thread Jan Klauck
Ian Parker wrote: If we program a machine for winning a war, we must think well what we mean by winning. I wasn't thinking about winning a war; I was thinking much more about sexual morality and men kissing. If we program a machine for doing X, we must think well what we mean by X. Now