Interesting article:
http://www.newscientist.com/article/mg20727723.700-artificial-life-forms-evolve-basic-intelligence.html?page=1
On Sun, Aug 1, 2010 at 3:13 PM, Jan Klauck jkla...@uni-osnabrueck.de wrote:
Ian Parker wrote
I would like your opinion on *proofs* which involve an unproven hypothesis,
This is much more interesting in the context of evolution than it is for the creation of AGI. The point is that all the things that have been done here could have been done (much more simply, in fact) with straightforward narrow programs. However, it demonstrates the early multicellular organisms of the Precambrian
If you follow the link from Kurzweil, you get a really confusing picture/screen. And I wonder whether the real action/problem-solving isn't largely taking place in the viewer/programmer's mind.
From: rob levy
Sent: Friday, August 06, 2010 7:23 PM
To: agi
Subject: Re: [agi] AGI Alife
Interesting article
Ian Parker wrote
I would like your opinion on *proofs* which involve an unproven hypothesis,
I have no elaborated opinion on that.
Adding is simple; proving is hard. This is a truism. I would like your opinion on *proofs* which involve an unproven hypothesis, such as the Riemann hypothesis. Hardy and Littlewood proved the weak (ternary) Goldbach conjecture for all sufficiently large odd numbers under this assumption. Unfortunately the converse does not apply: the truth of Goldbach does not imply the Riemann hypothesis.
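To pin down the logic (my formalization, not from the thread; GRH = generalized Riemann hypothesis, wGC = weak Goldbach conjecture):

\[
\text{GRH} \;\Rightarrow\; \text{wGC}, \qquad \text{wGC} \;\not\Rightarrow\; \text{GRH}.
\]

By contraposition, a counterexample to wGC would refute GRH, but a proof of wGC tells us nothing either way about GRH.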
Ian Parker wrote
Then define your political objectives. No holes, no ambiguity, no forgotten cases. Or does the AGI ask for our feedback during a mission? If yes, down to what detail?
With Matt's ideas it does exactly that.
How does it know when to ask? You give it rules, but those rules can
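One way to see the hole being pointed at here (a minimal sketch; the function names and the threshold are hypothetical illustrations, not Matt's actual proposal): a fixed escalation rule only works as well as the confidence estimate feeding it.

# Sketch of a fixed "ask the human" rule. The 0.8 threshold and the
# confidence estimate are illustrative assumptions only.

def should_ask_human(confidence: float, threshold: float = 0.8) -> bool:
    """Escalate to a human whenever the agent's own confidence is low."""
    return confidence < threshold

def act(action: str, confidence: float) -> str:
    if should_ask_human(confidence):
        return f"ASK: unsure about {action!r} (confidence {confidence:.2f})"
    return f"DO: {action}"

print(act("reroute supply convoy", 0.95))  # DO: reroute supply convoy
print(act("return fire", 0.40))            # ASK: unsure about 'return fire' (confidence 0.40)

If the confidence estimate is miscalibrated, the rule fails silently in exactly the cases that matter, which is the trouble with enumerating rules in advance.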
On 28 July 2010 23:09, Jan Klauck jkla...@uni-osnabrueck.de wrote:
Ian Parker wrote
If we program a machine for winning a war, we must think well what we mean by winning.
I wasn't thinking about winning a war; I was thinking much more about sexual morality and men kissing.
If we program a machine for doing X, we must think well what we mean by X.
On 27 July 2010 21:06, Jan Klauck jkla...@uni-osnabrueck.de wrote:
Second observation about societal punishment eliminating freeloaders. The fact of the matter is that *freeloading* is less of a problem in advanced societies than misplaced unselfishness.
Fact of the matter, hm?
One last point. You say freeloading can cause a society to disintegrate. One society that has come pretty damn close to disintegration is Iraq. The deaths in Iraq were very much due to sectarian bloodletting. Unselfishness, if you like. Would that the Iraqis (and Afghans) were more selfish.
to happen, but not as quickly as one might hope.
-- Matt Mahoney, matmaho...@yahoo.com
From: Ian Parker ianpark...@gmail.com
To: agi agi@v2.listbox.com
Sent: Wed, July 28, 2010 6:54:05 AM
Subject: Re: [agi] AGI Alife
On 27 July 2010 21:06, Jan Klauck jkla...@uni-osnabrueck.de wrote:
Ian Parker wrote
There are the military costs,
Do you realize that you often narrow a discussion down to military issues of the Iraq/Afghanistan theater?
Freeloading in social simulation isn't about guys using a plane for free. When you analyse or design a system you look for holes in the
Unselfishness gone wrong is a symptom. I think that this and all the other examples should be cautionary for anyone who follows the biological model. Do we want a system that thinks the way we do? Hell, no! What we would want in a *friendly* system would be a set of utilitarian axioms. That would
Ian Parker wrote
What we would want in a *friendly* system would be a set of utilitarian axioms.
If we program a machine for winning a war, we must think well what we mean by winning.
(Norbert Wiener, Cybernetics, 1948)
It is also important that AGI is fully axiomatic and proves that 1+1=2.
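As an aside (my formalization, not from the thread): in Peano arithmetic, with \(1 = S(0)\), \(2 = S(1)\), and the defining equations \(a + 0 = a\) and \(a + S(b) = S(a + b)\), the proof is a four-step rewrite:

\[
1 + 1 \;=\; 1 + S(0) \;=\; S(1 + 0) \;=\; S(1) \;=\; 2.
\]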
On 28 July 2010 19:56, Jan Klauck jkla...@uni-osnabrueck.de wrote:
Ian Parker wrote
What we would want in a *friendly* system would be a set of utilitarian axioms.
If we program a machine for winning a war, we must think well what we mean by winning.
I wasn't thinking about winning a war; I was thinking much more about sexual morality and men kissing.
If we program a machine for doing X, we must think well what we mean by X.
Now
I spent a while back in the 90s trying to make AGI and alife converge, before establishing to my satisfaction that the approach is a dead end: we will never have anywhere near enough computing power to make alife evolve significant intelligence (the only known success took 4 billion years on a
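A crude back-of-envelope makes the scale problem vivid. Every number below is an illustrative assumption (a rough microbe census, an assumed simulator throughput), not a measurement:

# Back-of-envelope: cost of replaying evolution in silico.
# All constants are illustrative assumptions, not measurements.
ORGANISMS_ALIVE = 1e30          # assumed count of concurrently living microbes on Earth
YEARS_OF_EVOLUTION = 4e9        # approximate age of life on Earth, in years
SIM_ORG_YEARS_PER_SEC = 1e9     # assumed simulator throughput, organism-years/second

total_org_years = ORGANISMS_ALIVE * YEARS_OF_EVOLUTION      # ~4e39 organism-years of search
seconds = total_org_years / SIM_ORG_YEARS_PER_SEC           # ~4e30 seconds
years = seconds / (3600 * 24 * 365)
print(f"{total_org_years:.0e} organism-years; ~{years:.0e} years of wall-clock time")

Even granting many orders of magnitude of slack in these guesses, the gap dwarfs any plausible hardware growth, which is the point being made.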
Linas Vepstas wrote
First my answers to Antonio:
1) What is the role of Digital Evolution (and ALife) in the AGI context?
The nearest I can come up with is Goertzel's virtual pre-school idea,
where the environment is given and the proto-AGI learns within it.
It's certainly possible to place
I did take a look at the journal. There is one question I have with regard to the assumptions. Mathematically, the number of prisoners in the Prisoner's Dilemma cooperating or not reflects the prevalence of cooperators or non-cooperators present. Evolution *should* tend to von Neumann's zero-sum
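To make the prevalence claim concrete, here is a minimal replicator-dynamics sketch in Python (the payoff values and step size are standard textbook choices, assumed rather than taken from the journal under discussion): with one-shot Prisoner's Dilemma payoffs, defection strictly dominates, so the cooperator share declines from any starting prevalence.

# Replicator dynamics for the one-shot Prisoner's Dilemma.
# Payoffs for the row player, standard ordering T > R > P > S.
R, S, T, P = 3.0, 0.0, 5.0, 1.0  # reward, sucker, temptation, punishment

def step(x: float, dt: float = 0.01) -> float:
    """One Euler step of replicator dynamics; x = fraction of cooperators."""
    fc = R * x + S * (1 - x)        # expected payoff of a cooperator
    fd = T * x + P * (1 - x)        # expected payoff of a defector
    fbar = x * fc + (1 - x) * fd    # population-average payoff
    return x + dt * x * (fc - fbar)

x = 0.9  # start with 90% cooperators
for _ in range(2000):
    x = step(x)
print(f"cooperator share after 2000 steps: {x:.4f}")  # approaches 0

Sustained cooperation needs repeated play or added structure (punishment, reputation); in the plain one-shot game the population slides toward all-defect.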
I think I should say that for a problem to be suitable for GAs, the space in which it is embedded has to be nonlinear. Otherwise we have an easy calculus solution. A fair number of such systems are described at
http://www.springerlink.com/content/h46r77k291rn/?p=bfaf36a87f704d5cbcb66429f9c8a808pi=0
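To illustrate the nonlinearity point with a toy example (all choices here, including the Rastrigin test function and the GA parameters, are mine, not from the linked paper): a highly multimodal fitness landscape where a pure calculus/gradient approach stalls in local optima, but a simple GA still makes progress.

import random
import math

def rastrigin(xs):
    """Nonlinear, highly multimodal test function; global minimum 0 at the origin."""
    return 10 * len(xs) + sum(x * x - 10 * math.cos(2 * math.pi * x) for x in xs)

def evolve(dim=5, pop_size=60, gens=200, sigma=0.3):
    pop = [[random.uniform(-5.12, 5.12) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=rastrigin)                        # rank by fitness (lower is better)
        parents = pop[: pop_size // 4]                 # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, dim)             # one-point crossover
            child = [x + random.gauss(0, sigma)        # Gaussian mutation
                     for x in a[:cut] + b[cut:]]
            children.append(child)
        pop = parents + children
    best = min(pop, key=rastrigin)
    return best, rastrigin(best)

_, fit = evolve()
print(f"best fitness found: {fit:.3f}")  # lower is better; 0 is the global optimum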
Evolving AGI via an Alife approach would be possible, but would likely take many orders of magnitude more resources than engineering AGI...
I worked on Alife years ago and became frustrated that the artificial biology and artificial chemistry one uses are never as fecund as the real thing. We
I saw the following post from Antonio Alberti, on the LinkedIn discussion group:
ALife and AGI
Dear group participants,
The relation between AGI and ALife greatly interests me. However, too few recent works try to relate them. For example, many papers presented at AGI-09