On 27 July 2010 21:06, Jan Klauck jkla...@uni-osnabrueck.de wrote:
Second observation about societal punishment eliminating freeloaders. The fact of the matter is that *freeloading* is less of a problem in advanced societies than misplaced unselfishness.
Fact of the matter, hm?
One last point. You say freeloading can cause a society to disintegrate. One society that has come pretty damn close to disintegration is Iraq. The deaths in Iraq were very much due to sectarian bloodletting. Unselfishness, if you like. Would that the Iraqis (and Afghans) were more selfish.
-
:) Intelligence isn't limited to higher cognitive functions. One could say a virus is intelligent or alive because it can replicate itself. Intelligence is not just one function or ability; it can be many different things. But mostly, for us, it comes down to what the system can accomplish for
Ian Parker wrote:
Matt Mahoney has costed out his view of AGI. I say that costs must be recoverable as we go along. Matt, don't frighten people with a high estimate of the cost. Frighten them instead with the bill they are paying now for dumb systems.
It is not my intent to scare people out of
Ian Parker wrote:
There are the military costs,
Do you realize that you often narrow a discussion down to military
issues of the Iraq/Afghanistan theater?
Freeloading in social simulation isn't about guys using a plane for
free. When you analyse or design a system you look for holes in the
A. T. Murray wrote:
Robot: I AM ANDRU
Robot: I AM ANDRU
Robot: ANDRU HELPS KIDS
Robot: KIDS MAKE ROBOTS
Robot: ROBOTS NEED ME
Robot: I IS I
Robot: I AM ANDRU
Robot: ANDRU HELPS KIDS
Robot: KIDS MAKE ROBOTS
For the first time in our dozen-plus years of developing MindForth, the
Unselfishness gone wrong is a symptom. I think that this and all the other examples should be cautionary for anyone who follows the biological model. Do we want a system that thinks the way we do? Hell no! What we would want in a *friendly* system would be a set of utilitarian axioms. That would
Ian Parker wrote:
What we would want
in a *friendly* system would be a set of utilitarian axioms.
If we program a machine for winning a war, we must think well what
we mean by winning.
(Norbert Wiener, Cybernetics, 1948)
It is also important that an AGI be fully axiomatic and prove that 1+1=2.
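A minimal sketch of what "fully axiomatic and proves that 1+1=2" can amount to, written in Lean 4 with an illustrative Peano-style encoding (the names Nat', add and one_plus_one are illustrative, not from the thread):

  inductive Nat' where
    | zero : Nat'
    | succ : Nat' -> Nat'

  def add : Nat' -> Nat' -> Nat'
    | n, Nat'.zero   => n
    | n, Nat'.succ m => Nat'.succ (add n m)

  -- 1 + 1 = 2: both sides reduce to succ (succ zero) by unfolding add,
  -- so the proof is definitional reflexivity.
  theorem one_plus_one :
      add (Nat'.succ Nat'.zero) (Nat'.succ Nat'.zero)
        = Nat'.succ (Nat'.succ Nat'.zero) := rfl

The point of such a derivation is only that every step is licensed by an axiom or a definition, which is what "fully axiomatic" asks of the system.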
On 28 July 2010 19:56, Jan Klauck jkla...@uni-osnabrueck.de wrote:
Ian Parker wrote:
What we would want
in a *friendly* system would be a set of utilitarian axioms.
If we program a machine for winning a war, we must think well what
we mean by winning.
I wasn't thinking about winning a
Ian Parker wrote:
If we program a machine for winning a war, we must think well what
we mean by winning.
I wasn't thinking about winning a war; I was thinking much more about sexual morality and men kissing.
If we program a machine for doing X, we must think well what we mean
by X.
Now