Re: [agi] AGI & Alife

2010-08-06 Thread rob levy
Interesting article:
http://www.newscientist.com/article/mg20727723.700-artificial-life-forms-evolve-basic-intelligence.html?page=1

On Sun, Aug 1, 2010 at 3:13 PM, Jan Klauck jkla...@uni-osnabrueck.de wrote:

 Ian Parker wrote

  I would like your
  opinion on *proofs* which involve an unproven hypothesis,

 I've no elaborated opinion on that.








Re: [agi] AGI & Alife

2010-08-06 Thread Ian Parker
This is much more interesting in the context of Evolution than it is for
the creation of AGI. The point is that everything these organisms do could
have been done (much more simply, in fact) by straightforward narrow
programs. However, it does illustrate the early multicellular organisms of the
Precambrian and early Cambrian.

What AGI is interested in is how *language* evolves, that is to say the last
6 million years or so. We also need a process for creating AGI which is
rather more efficient than Evolution. We can't wait that long for something
to happen.


  - Ian Parker

On 6 August 2010 19:23, rob levy r.p.l...@gmail.com wrote:

 Interesting article:
 http://www.newscientist.com/article/mg20727723.700-artificial-life-forms-evolve-basic-intelligence.html?page=1

 On Sun, Aug 1, 2010 at 3:13 PM, Jan Klauck jkla...@uni-osnabrueck.de wrote:

 Ian Parker wrote

  I would like your
  opinion on *proofs* which involve an unproven hypothesis,

 I've no elaborated opinion on that.








Re: [agi] AGI & Alife

2010-08-06 Thread Mike Tintner
This is on the surface interesting. But I'm kinda dubious about it. 

I'd like to know exactly what's going on - who or what (what kind of organism) 
is solving what kind of problem about what? The exact nature of the problem and 
the solution, not just a general blurb description.

If you follow the link from Kurzweil, you get a really confusing 
picture/screen. And I wonder whether the real action/problem-solving isn't 
largely taking place in the viewer/programmer's mind.


From: rob levy 
Sent: Friday, August 06, 2010 7:23 PM
To: agi 
Subject: Re: [agi] AGI & Alife


Interesting article: 
http://www.newscientist.com/article/mg20727723.700-artificial-life-forms-evolve-basic-intelligence.html?page=1


On Sun, Aug 1, 2010 at 3:13 PM, Jan Klauck jkla...@uni-osnabrueck.de wrote:

  Ian Parker wrote


   I would like your
   opinion on *proofs* which involve an unproven hypothesis,


  I've no elaborated opinion on that.








Re: [agi] AGI & Alife

2010-08-01 Thread Jan Klauck
Ian Parker wrote

 I would like your
 opinion on *proofs* which involve an unproven hypothesis,

I've no elaborated opinion on that.




Re: [agi] AGI & Alife

2010-07-31 Thread Ian Parker
Adding is simple, proving is hard. This is a truism. I would like your
opinion on *proofs* which involve an unproven hypothesis, such as Riemann.
Hardy and Littlewood proved a (weak) form of Goldbach with this assumption.
Unfortunately the converse does not apply: the truth of Goldbach does not imply
the Riemann hypothesis. Riemann would be proved if the converse were valid and
the theorem were proved another way.

I am not really arguing deep philosophy; what I am saying is that a
non-inscrutable system must be able to go back to its basic axioms.


  - Ian Parker

On 31 July 2010 00:25, Jan Klauck jkla...@uni-osnabrueck.de wrote:

 Ian Parker wrote

  Then define your political objectives. No holes, no ambiguity, no
  forgotten cases. Or does the AGI ask for our feedback during mission?
  If yes, down to what detail?
 
  With Matt's ideas it does exactly that.

 How does it know when to ask? You give it rules, but those rules can
 be somehow imperfect. How are its actions monitored and sanctioned?
 And hopefully it's clear that we are now far from mathematical proof.

  No we simply add to the axiom pool.

 Adding is simple, proving is not. Especially when the rules, goals,
 and constraints are not arithmetic but ontological and normative
 statements. Wether by NL or formal system, it's error-prone to
 specify our knowledge of the world (much of it is implicit) and
 teach it to the AGI. It's similar to law which is similar to math
 with referenced axioms and definitions and a substitution process.
 You often find flaws--most are harmless, some are not.

 Proofs give us islands of certainty in an explored sea within the
 ocean of the possible. We end up with heuristics. That's what this
 discussion is about, when I remember right. :)

 cu Jan








Re: [agi] AGI & Alife

2010-07-30 Thread Jan Klauck
Ian Parker wrote

 Then define your political objectives. No holes, no ambiguity, no
 forgotten cases. Or does the AGI ask for our feedback during mission?
 If yes, down to what detail?

 With Matt's ideas it does exactly that.

How does it know when to ask? You give it rules, but those rules can
be somehow imperfect. How are its actions monitored and sanctioned?
And hopefully it's clear that we are now far from mathematical proof.

 No we simply add to the axiom pool.

Adding is simple, proving is not. Especially when the rules, goals,
and constraints are not arithmetic but ontological and normative
statements. Whether by NL or formal system, it's error-prone to
specify our knowledge of the world (much of it is implicit) and
teach it to the AGI. It's similar to law which is similar to math
with referenced axioms and definitions and a substitution process.
You often find flaws--most are harmless, some are not.

Proofs give us islands of certainty in an explored sea within the
ocean of the possible. We end up with heuristics. That's what this
discussion is about, when I remember right. :)

cu Jan




Re: [agi] AGI & Alife

2010-07-29 Thread Ian Parker
On 28 July 2010 23:09, Jan Klauck jkla...@uni-osnabrueck.de wrote:

 Ian Parker wrote

  If we program a machine for winning a war, we must think well what
  we mean by winning.
 
  I wasn't thinking about winning a war, I was much more thinking about
  sexual morality and men kissing.

 If we program a machine for doing X, we must think well what we mean
 by X.

 Now clearer?

  Winning a war is achieving your political objectives in the war. Simple
  definition.

 Then define your political objectives. No holes, no ambiguity, no
 forgotten cases. Or does the AGI ask for our feedback during mission?
 If yes, down to what detail?


With Matt's ideas it does exactly that.


  The axioms which we cannot prove
  should be listed. You can't prove them. Let's list them and all the
  assumptions.

 And then what? Cripple the AGI by applying just those theorems we can
 prove? That excludes of course all those we're uncertain about. And
 it's not so much a single theorem that's problematic but a system of
 axioms and inference rules that changes its properties when you
 modify it or that is incomplete from the beginning.


No, we simply add to the axiom pool. *All* I am saying is that we must always
have a lemma trail taking us back to the most fundamental axioms. Suppose I say

W = AσT^4

Now I ask the system to prove this. At the bottom of the lemma trail will be
Clifford algebra. This relates Bose-Einstein statistics to the spin, in this
case of the photon. It is Quantum Mechanics at a very fundamental level. A
fermion has half-integer spin.

I can introduce as many axioms as I want. I can say that i = √-1. I can call
this statement an axiom, as a counterexample to your natural numbers. In
constructing Clifford algebra I make a number of statements.

This thinking in terms of axioms, I repeat, does not limit the power of AGI.
If we have a database you could almost say that a lemma trail is in essence
trivial.

What it does do is invalidate the biological model. *An absolute requirement
for AGI is openness.* In other words we must be able to examine the
arguments and their validity.


 Example (very plain just to make it clearer what I'm talking about):

 The natural numbers N are closed against addition. But N is not
 closed against subtraction, since n - m < 0 where m > n.

 You can prove the theorem that subtracting a positive number from
 another number decreases it:

 http://us2.metamath.org:88/mpegif/ltsubpos.html

 but you can still have a formal system that runs into problems.
 In the case of N it's missing closedness, i.e., undefined area.
 Now transfer this simple example to formal systems in general.
 You have to prove every formal system as it is, not just a single
 theorem. The behavior of an AGI isn't a single theorem but a system.

  The heuristics could be tested in an off line system.

 Exactly. But by definition heuristics are incomplete, their solution
 space is smaller than the set of all solutions. No guarantee for the
 optimal solution, just probabilities < 1, elaborated hints.

  Unselfishness going wrong is in fact a frightening thought. It would
  in
  AGI be a symptom of incompatible axioms.
 
  Which can happen in a complex system.
 
  Only if the definitions are vague.

 I bet against this.

  Better to have a system based on *democracy* in some form or other.

 The rules you mention are goals and constraints. But they are heuristics
 you check during runtime.


That is true. Also see above. The system cannot be inscrutable.


  - Ian Parker










Re: [agi] AGI & Alife

2010-07-28 Thread Ian Parker
On 27 July 2010 21:06, Jan Klauck jkla...@uni-osnabrueck.de wrote:


  Second observation about societal punishment eliminating free loaders.
 The
  fact of the matter is that *freeloading* is less of a problem in
  advanced societies than misplaced unselfishness.

 Fact of the matter, hm? Freeloading is an inherent problem in many
 social configurations. 9/11 brought down two towers, freeloading can
 bring down an entire country.

There are very considerable knock-on costs. There is the mushrooming cost
of security. This manifests itself in many ways. There is the cost of
disruption to air travel. If someone rides on a plane without a ticket no
one's life is put at risk. There are the military costs: it costs $1m per
year to keep a soldier in Afghanistan. I don't know how much a Taliban
fighter costs, but it must be a lot less.

Clearly any reduction in these costs would be welcomed. If someone were to
come along in the guise of social simulation and offer a reduction in these
costs, the research would pay for itself many times over. That is what *you*
are interested in.

This may be a somewhat unpopular thing to say, but money *is* important.
Matt Mahoney has costed his view of AGI. I say that costs must be
recoverable as we go along. Matt, don't frighten people with a high estimate
of cost. Frighten people instead with the bill they are paying now for dumb
systems.


  simulations seem :-
 
  1) To be better done by Calculus.

 You usually use both, equations and heuristics. It depends on the
 problem, your resources, your questions, the people working with it
 a.s.o.


That is the way things should be done. I agree absolutely. We could in fact
take steepest descent (Calculus) and GAs and combine them together in a
single composite program. This would in fact be quite a useful exercise. We
would also eliminate genes that simply dealt with Calculus and steepest
descent.

I don't know whether it is useful to think in topological terms.
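
A minimal sketch of such a composite, assuming a toy real-valued objective; the
objective, population size and step sizes are illustrative choices only, not
anything from this thread:

import random

def fitness(x):
    # Toy non-linear objective to maximize; stands in for the real problem.
    return -(x[0] ** 2 + 10.0 * (x[1] - 1.0) ** 2)

def grad_step(x, h=1e-5, lr=0.01):
    # One steepest-ascent step on the fitness, using a numerical gradient.
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((fitness(xp) - fitness(xm)) / (2 * h))
    return [xi + lr * gi for xi, gi in zip(x, g)]

def evolve(pop_size=20, generations=50):
    pop = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(pop_size)]
    for _ in range(generations):
        pop = [grad_step(ind) for ind in pop]            # Calculus: local polish
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                   # GA: truncation selection
        children = [[p + random.gauss(0, 0.3) for p in random.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children                         # GA: mutated offspring
    return max(pop, key=fitness)

print(evolve())   # ends up near the optimum at (0, 1)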


  - Ian Parker










Re: [agi] AGI & Alife

2010-07-28 Thread Ian Parker
One last point. You say freeloading can cause a society to disintegrate. One
society that has come pretty damn close to disintegration is Iraq.
The deaths in Iraq were very much due to sectarian blood-letting.
Unselfishness, if you like.

Would that the Iraqis (and Afghans) were more selfish.


  - Ian Parker





Re: [agi] AGI & Alife

2010-07-28 Thread Matt Mahoney
Ian Parker wrote:
 Matt Mahoney has costed his view of AGI. I say that costs must be recoverable 
as we go along. Matt, don't frighten people with a high estimate of cost. 
Frighten people instead with the bill they are paying now for dumb systems.

It is not my intent to scare people out of building AGI, but rather to be 
realistic about its costs. Building machines that do what we want is a much 
harder problem than building intelligent machines. Machines surpassed human 
intelligence 50 years ago. But getting them to do useful work is still a $60 
trillion per year problem. It's going to happen, but not as quickly as one might 
hope.

 -- Matt Mahoney, matmaho...@yahoo.com





From: Ian Parker ianpark...@gmail.com
To: agi agi@v2.listbox.com
Sent: Wed, July 28, 2010 6:54:05 AM
Subject: Re: [agi] AGI & Alife




On 27 July 2010 21:06, Jan Klauck jkla...@uni-osnabrueck.de wrote:


 Second observation about societal punishment eliminating free loaders. The
 fact of the matter is that *freeloading* is less of a problem in
 advanced societies than misplaced unselfishness.

Fact of the matter, hm? Freeloading is an inherent problem in many
social configurations. 9/11 brought down two towers, freeloading can
bring down an entire country.



There are very considerable knock-on costs. There is the mushrooming cost of 
security. This manifests itself in many ways. There is the cost of disruption 
to air travel. If someone rides on a plane without a ticket no one's life is 
put at risk. There are the military costs, it costs $1m per year to keep a 
soldier in Afghanistan. I don't know how much a Taliban fighter costs, but it 
must be a lot less.

Clearly any reduction in these costs would be welcomed. If someone were to come 
along in the guise of social simulation and offer a reduction in these costs, 
the research would pay for itself many times over. That is what you are 
interested in.

This may be a somewhat unpopular thing to say, but money is important. Matt 
Mahoney has costed his view of AGI. I say that costs must be recoverable as we 
go along. Matt, don't frighten people with a high estimate of cost. Frighten 
people instead with the bill they are paying now for dumb systems.
 
 simulations seem :-

 1) To be better done by Calculus.

You usually use both, equations and heuristics. It depends on the
problem, your resources, your questions, the people working with it
a.s.o.


That is the way things should be done. I agree absolutely. We could in fact take 
steepest descent (Calculus) and GAs and combine them together in a single 
composite program. This would in fact be quite a useful exercise. We would also 
eliminate genes that simply dealt with Calculus and steepest descent.

I don't know whether it is useful to think in topological terms.


  - Ian Parker
 




Re: [agi] AGI & Alife

2010-07-28 Thread Jan Klauck
Ian Parker wrote

 There are the military costs,

Do you realize that you often narrow a discussion down to military
issues of the Iraq/Afghanistan theater?

Freeloading in social simulation isn't about guys using a plane for
free. When you analyse or design a system you look for holes in the
system that allow people to exploit it. In complex systems that happens
often. Most freeloading isn't much of a problem, just friction, but
some have the power to damage the system too much. You have that in
the health system, social welfare, subsidies and funding, the usual
moral hazard issues in administration, services a.s.o.

To come back to AGI: when you hope to design, say, a network of
heterogeneous neurons (taking Linas' example) you should be interested
in excluding mechanisms that allow certain neurons to consume resources
without delivering something in return because of the way resource
allocation is organized. These freeloading neurons could go undetected
for a while but when you scale the network up or confront it with novel
inputs they could make it run slow or even break it.
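
A minimal sketch of the bookkeeping this implies, assuming each node reports what
it consumes and what it contributes per cycle; the Node fields and the threshold
are illustrative assumptions, not a known mechanism:

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    consumed: float      # resources drawn from the allocator
    contributed: float   # useful output credited by downstream nodes

def freeloaders(nodes, min_ratio=0.1):
    # Flag nodes whose contribution/consumption ratio falls below min_ratio.
    flagged = []
    for n in nodes:
        ratio = n.contributed / n.consumed if n.consumed > 0 else float("inf")
        if ratio < min_ratio:
            flagged.append(n)
    return flagged

nodes = [Node("a", 10.0, 4.0), Node("b", 12.0, 0.5), Node("c", 0.0, 0.0)]
print([n.name for n in freeloaders(nodes)])   # -> ['b']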

 If someone were to come
 along in the guise of social simulation and offer a reduction in
 these costs the research would pay for itself many times over.

SocSim research into peace and conflict studies isn't new. And
some people in the community work on the Iraq/Afghanistan issue (for
the US).

 That is the way things should be done. I agree absolutely. We could in
 fact
 take steepest descent (Calculus) and GAs and combine them together in a
 single composite program. This would in fact be quite a useful exercise.

Just a note: Social simulation is not so much about GAs. You use
agent systems and equation systems. Often you mix both in that you
define the agent's behavior and the environment via equations, let
the sim run and then describe the results in statistical terms or
with curve fitting in equations again.
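
A minimal sketch of that mix, assuming a toy opinion model in which each agent's
behavior is given by an equation (move toward a randomly chosen neighbour's
opinion) and the run is then summarized statistically; the update rule and the
parameters are illustrative only:

import random
import statistics

def step(opinions, rate=0.2):
    # Equation-defined behavior: each agent moves toward a random neighbour's opinion.
    new = []
    for x in opinions:
        j = random.randrange(len(opinions))
        new.append(x + rate * (opinions[j] - x))
    return new

opinions = [random.uniform(0, 1) for _ in range(100)]
for _ in range(200):
    opinions = step(opinions)

print("mean", round(statistics.mean(opinions), 3),
      "stdev", round(statistics.stdev(opinions), 4))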

 One last point. You say freeloading can cause o society to disintegrate.
 One
 society that has come pretty damn close to disintegration is Iraq.
 The deaths in Iraq were very much due to sectarian blood letting.
 Unselfishness if you like.

Unselfishness gone wrong is a symptom, not a cause. The causes for
failed states are different.





Re: [agi] AGI & Alife

2010-07-28 Thread Ian Parker
Unselfishness gone wrong is a symptom. I think that this and all the other
examples should be cautionary for anyone who follows the biological model.
Do we want a system that thinks the way we do? Hell no! What we would want
in a *friendly* system would be a set of utilitarian axioms. That would
immediately make it think differently from us.

We certainly would not want a system which would arrest men kissing on a
park bench. In other words we would not want a system which was
axiomatically righteous. It is also important that AGI is fully axiomatic
and proves that 1+1=2 by set theory, as Russell did. This immediately takes
it out of the biological sphere.

We will need morality to be axiomatically defined.

Unselfishness going wrong is in fact a frightening thought. It would in AGI
be a symptom of incompatible axioms. In humans it is a real problem and it
should tell us that AGI cannot and should not be biologically based.

On 28 July 2010 15:59, Jan Klauck jkla...@uni-osnabrueck.de wrote:

 Ian Parker wrote

  There are the military costs,

 Do you realize that you often narrow a discussion down to military
 issues of the Iraq/Afghanistan theater?

 Freeloading in social simulation isn't about guys using a plane for
 free. When you analyse or design a system you look for holes in the
 system that allow people to exploit it. In complex systems that happens
 often. Most freeloading isn't much of a problem, just friction, but
 some have the power to damage the system too much. You have that in
 the health system, social welfare, subsidies and funding, the usual
 moral hazard issues in administration, services a.s.o.


 To come back to AGI: when you hope to design, say, a network of
 heterogenous neurons (taking Linas' example) you should be interested
 in excluding mechanisms that allow certain neurons to consume resources
 without delivering something in return because of the way resource
 allocation is organized. These freeloading neurons could go undetected
 for a while but when you scale the network up or confront it with novel
 inputs they could make it run slow or even break it.


In point of fact we can look at this another way. Let's dig a little bit
deeper: http://sites.google.com/site/aitranslationproject/computergobbledegook.
If we have one AGI system we can have 2 (or 3 even; automatic landing in fog
is a triplex system). Suppose system A is monitoring system B. If system B's
resources are being used up, A can shut down processes in A. I talked about
computer gobbledegook. I also have the feeling that with AGI we should be
able to get intelligible advice (in NL) about what was going wrong. For this
reason it would not be possible to overload AGI.

I have the feeling that perhaps one aim in AGI should be user friendly
systems. One product is in fact a form filler.

As far as society is concerned, I think this all depends on how resource
limited we are. In a resource limited society freeloading is the biggest
issue. In our society violence in all its forms is the big issue. One need
not go to Iraq or Afghanistan for examples. There are plenty in ordinary
crime: happy slapping, domestic violence, violence against children.

If the people who wrote computer viruses stole a large sum of money, what
they did would, to me at any rate, be more forgivable. People take a
delight in wrecking things for other people, while not stealing very much
themselves. Iraq, Afghanistan and suicide murder are really simply an extreme
example of this. Why I come back to it is that the people feel they are
doing Allah's will. Happy slappers usually say they have nothing better to
do.

The fundamental fact about Western crime is that very little of it is to do
with personal gain or greed.


  If someone were to come
  along in the guise of social simulation and offer a reduction in
  these costs the research would pay for itself many times over.

 SocSim research into peace and conflict studies isn't new. And
 some people in the community work on the Iraq/Afghanistan issue (for
 the US).

  That is the way things should be done. I agree absolutely. We could in
  fact
  take steepest descent (Calculus) and GAs and combine them together in a
  single composite program. This would in fact be quite a useful exercise.

 Just a note: Social simulation is not so much about GAs. You use
 agent systems and equation systems. Often you mix both in that you
 define the agent's behavior and the environment via equations, let
 the sim run and then describe the results in statistical terms or
 with curve fitting in equations again.

  One last point. You say freeloading can cause o society to disintegrate.
  One
  society that has come pretty damn close to disintegration is Iraq.
  The deaths in Iraq were very much due to sectarian blood letting.
  Unselfishness if you like.

 Unselfishness gone wrong is a symptom, not a cause. The causes for
 failed states are different.


Axiomatic contradiction. Cannot occur in a mathematical system.


  - 

Re: [agi] AGI & Alife

2010-07-28 Thread Jan Klauck
Ian Parker wrote

 What we would want
 in a *friendly* system would be a set of utilitarian axioms.

If we program a machine for winning a war, we must think well what
we mean by winning.

(Norbert Wiener, Cybernetics, 1948)

 It is also important that AGI is fully axiomatic
 and proves that 1+1=2 by set theory, as Russell did.

Quoting the two important statements from

http://en.wikipedia.org/wiki/Principia_Mathematica#Consistency_and_criticisms

Gödel's first incompleteness theorem showed that Principia could not
be both consistent and complete.

and

Gödel's second incompleteness theorem shows that no formal system
extending basic arithmetic can be used to prove its own consistency.

So in effect your AGI is either crippled but safe, or powerful but
potentially behaving differently from your axiomatic intentions.

 We will need morality to be axiomatically defined.

As constraints, possibly. But we can only check the AGI in runtime for
certain behaviors (i.e., while it's active), but we can't prove in
advance whether it will break the constraints or not.

Get me right: We can do a lot with such formal specifications and we
should do them where necessary or appropriate, but we have to understand
that our set of guaranteed behavior is a proper subset of the set of
all possible behaviors the AGI can execute. It's heuristics in the end.

 Unselfishness going wrong is in fact a frightening thought. It would in
 AGI be a symptom of incompatible axioms.

Which can happen in a complex system.

 Suppose system A is monitoring system B. If system Bs
 resources are being used up A can shut down processes in A. I talked
 about computer gobledegook. I also have the feeling that with AGI we
 should be able to get intelligible advice (in NL) about what was going
 wrong. For this reason it would not be possible to overload AGI.

This isn't going to guarantee that system A, B, etc. behave in all
ways as intended, except they are all special purpose systems (here:
narrow AI). If A, B etc. are AGIs, then this checking is just an
heuristic, no guarantee or proof.

 In a resource limited society freeloading is the biggest issue.

All societies are and will be constrained by limited resources.

 The fundamental fact about Western crime is that very little of it is
 to do with personal gain or greed.

Not that sure whether this statement is correct. It feels wrong from
what I know about human behavior.

 Unselfishness gone wrong is a symptom, not a cause. The causes for
 failed states are different.

 Axiomatic contradiction. Cannot occur in a mathematical system.

See above...





Re: [agi] AGI & Alife

2010-07-28 Thread Ian Parker
On 28 July 2010 19:56, Jan Klauck jkla...@uni-osnabrueck.de wrote:

 Ian Parker wrote

  What we would want
  in a *friendly* system would be a set of utilitarian axioms.

 If we program a machine for winning a war, we must think well what
 we mean by winning.


I wasn't thinking about winning a war, I was much more thinking about sexual
morality and men kissing.

Winning a war is achieving your political objectives in the war. Simple
definition.


 (Norbert Wiener, Cybernetics, 1948)

  It is also important that AGI is fully axiomatic
  and proves that 1+1=2 by set theory, as Russell did.

 Quoting the two important statements from


 http://en.wikipedia.org/wiki/Principia_Mathematica#Consistency_and_criticisms

 Gödel's first incompleteness theorem showed that Principia could not
 be both consistent and complete.

 and

 Gödel's second incompleteness theorem shows that no formal system
 extending basic arithmetic can be used to prove its own consistency.

 So in effect your AGI is either crippled but safe or powerful but
 potentially behaves different from your axiomatic intentions.


You have to state what your axioms are. Gödel's theorem does indeed say that
you have to make some statements which are unprovable.
What I was in fact thinking in terms of was something like Mizar.
Mathematics starts off with simple ideas. The axioms which we cannot prove
should be listed. You can't prove them. Let's list them and all the
assumptions.

If we have a Mizar proof we assume things, and argue the case for a theorem
on what we have assumed. What you should be able to do is get from the ideas
of Russell and Bourbaki to something really meaty like Fermat's Last
Theorem, or the Riemann hypothesis.

The organization of Mizar (and Alcor which is a front end) is very much a
part of AGI. Alcor has in fact to do a similar job to Google in terms of a
search for theorems. Mizar though is different from Google in that we have
lemmas. You prove something by linking the lemmas up.

Suppose I were to search for Riemann Hypothesis. Alcor should give me all
the theorems that depend on it. It should tell me about the density of
primes. It should tell me about the Goldbach conjecture, proved by Hardy and
Littlewood to depend on Riemann.

Google is a step towards AGI. An Alcor which could produce chains
of argument and find lemmas would be a big step to AGI.
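
A minimal sketch of that kind of lemma search, assuming the theorem library is
stored as a dependency graph; the entries below are illustrative placeholders,
not actual Mizar or Alcor data:

from collections import deque

# theorem -> statements it depends on (placeholders only)
depends_on = {
    "theorem on the density of primes": ["Riemann hypothesis"],
    "a Goldbach-type result": ["Riemann hypothesis"],
    "some further corollary": ["theorem on the density of primes"],
}

def consequences(hypothesis):
    # Everything that depends on the hypothesis, directly or via a lemma trail.
    found, queue = set(), deque([hypothesis])
    while queue:
        target = queue.popleft()
        for thm, deps in depends_on.items():
            if target in deps and thm not in found:
                found.add(thm)
                queue.append(thm)
    return found

print(consequences("Riemann hypothesis"))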

Could Mizar contain knowledge which was non-mathematical? In a sense it
already can. Mizar will contain Riemannian differential geometry. This is
simply a piece of pure maths. I am allowed to make a conjecture, an axiom if
you like, that Riemannian differential geometry, in the shape of General
Relativity, is the way in which the Universe works. I have stated this as
an unproven assertion, one that has been constantly verified experimentally
but is unproven in the mathematical universe.


  We will need morality to be axiomatically defined.

 As constraints, possibly. But we can only check the AGI in runtime for
 certain behaviors (i.e., while it's active), but we can't prove in
 advance whether it will break the constraints or not.

 Get me right: We can do a lot with such formal specifications and we
 should do them where necessary or appropriate, but we have to understand
 that our set of guaranteed behavior is a proper subset of the set of
 all possible behaviors the AGI can execute. It's heuristics in the end.


The heuristics could be tested in an off line system.


  Unselfishness going wrong is in fact a frightening thought. It would in
  AGI be a symptom of incompatible axioms.

 Which can happen in a complex system.


Only if the definitions are vague. The definition of happiness is vague.
Better to have a system based on *democracy* in some form or other. The
beauty of Matt's system is that we would remain ultimately in charge of the
system. We make rules such as no imprisonment without trial, a minimum of laws
restricting personal freedom (men kissing), separation of powers in the
judiciary and executive, and the resolution of disputes without violence.
These are, I repeat, *not* fundamental philosophical principles but rules
which our civilization has devised and which have been found to work.

I have mentioned before that we could have more than 1 AGI system. All the 
*derived* principles would be tested off line on another AGI system.


  Suppose system A is monitoring system B. If system Bs
  resources are being used up A can shut down processes in A. I talked
  about computer gobledegook. I also have the feeling that with AGI we
  should be able to get intelligible advice (in NL) about what was going
  wrong. For this reason it would not be possible to overload AGI.

 This isn't going to guarantee that system A, B, etc. behave in all
 ways as intended, except they are all special purpose systems (here:
 narrow AI). If A, B etc. are AGIs, then this checking is just an
 heuristic, no guarantee or proof.

  In a resource limited society freeloading is the biggest 

Re: [agi] AGI & Alife

2010-07-28 Thread Jan Klauck
Ian Parker wrote

 If we program a machine for winning a war, we must think well what
 we mean by winning.

 I wasn't thinking about winning a war, I was much more thinking about
 sexual morality and men kissing.

If we program a machine for doing X, we must think well what we mean
by X.

Now clearer?

 Winning a war is achieving your political objectives in the war. Simple
 definition.

Then define your political objectives. No holes, no ambiguity, no
forgotten cases. Or does the AGI ask for our feedback during mission?
If yes, down to what detail?

 The axioms which we cannot prove
 should be listed. You can't prove them. Let's list them and all the
 assumptions.

And then what? Cripple the AGI by applying just those theorems we can
prove? That excludes of course all those we're uncertain about. And
it's not so much a single theorem that's problematic but a system of
axioms and inference rules that changes its properties when you
modify it or that is incomplete from the beginning.

Example (very plain just to make it clearer what I'm talking about):

The natural numbers N are closed against addition. But N is not
closed against subtraction, since n - m < 0 where m > n.

You can prove the theorem that subtracting a positive number from
another number decreases it:

http://us2.metamath.org:88/mpegif/ltsubpos.html

but you can still have a formal system that runs into problems.
In the case of N it's missing closedness, i.e., undefined area.
Now transfer this simple example to formal systems in general.
You have to prove every formal system as it is, not just a single
theorem. The behavior of an AGI isn't a single theorem but a system.
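
To make the plain example concrete, a minimal sketch checking a finite sample of
the naturals; the sample range is just an illustrative cut-off:

# Addition never leaves the naturals, but subtraction can: n - m < 0 whenever m > n.
def is_natural(x):
    return isinstance(x, int) and x >= 0

sample = range(0, 20)
print(all(is_natural(a + b) for a in sample for b in sample))   # True
print(all(is_natural(a - b) for a in sample for b in sample))   # False, e.g. 0 - 1 = -1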

 The heuristics could be tested in an off line system.

Exactly. But by definition heuristics are incomplete, their solution
space is smaller than the set of all solutions. No guarantee for the
optimal solution, just probabilities < 1, elaborated hints.

 Unselfishness going wrong is in fact a frightening thought. It would
 in
 AGI be a symptom of incompatible axioms.

 Which can happen in a complex system.

 Only if the definitions are vague.

I bet against this.

 Better to have a system based on *democracy* in some form or other.

The rules you mention are goals and constraints. But they are heuristics
you check during runtime.





Re: [agi] AGI & Alife

2010-07-27 Thread Russell Wallace
I spent a while back in the 90s trying to make AGI and alife converge,
before establishing to my satisfaction the approach is a dead end: we
will never have anywhere near enough computing power to make alife
evolve significant intelligence (the only known success took 4 billion
years on a planetary sized nanocomputer network, after all), even if
we could set up just the right selection pressures, which we can't.

On Tue, Jul 27, 2010 at 4:23 AM, Linas Vepstas linasveps...@gmail.com wrote:
 I saw the following post from Antonio Alberti, on the linked-in
 discussion group:

ALife and AGI

Dear group participants.

The relation among AGI and ALife greatly interests me. However, too few 
recent works try to relate them. For example, many papers presented in AGI-09 
(http://agi-conf.org/2009/) are about program learning algorithms (combining 
evolutionary learning and analytical learning). In AGI 2010, virtual pets 
have been presented by Ben Goertzel and are also another topic of this forum. 
There are other approaches in AGI that uses some digital evolutionary 
approach for AGI. For me it is a clear clue that both are related in some 
instance.


By ALife I mean the life-as-it-could-be approach (not simulate, but to use 
digital environment to evolve digital organisms using digital evolution 
(faster than Natural one - see 
http://www.hplusmagazine.com/articles/science/stephen-hawking-%E2%80%9Chumans-have-entered-new-stage-evolution%E2%80%9D).

So, I would like to propose some discussion topics regarding ALIfe and AGI:

1) What is the role of Digital Evolution (and ALife) in the AGI context?

2) Is it possible that some aspects of AGI could self-emerge from the digital 
evolution of intelligent autonomous agents?

3) Is there any research group trying to converge both approaches?

Best Regards,

  and my reply was below:

 For your question 3), I have no idea. For question 1) I can't say I've
 ever heard of anyone talk about this. For question 2), I imagine the
 answer is yes, although the boundaries between what's Alife and
 what's program learning (for example) may be blurry.

 So, imagine, for example, a population of many different species of
 neurons (or should I call them automata? or maybe I should call them
 virtual ants?) Most of the individuals have only a few friends (a
 narrow social circle) -- the friendship relationship can be viewed
 as an axon-dendrite connection -- these friendships are semi-stable;
 they evolve over time, and the type & quality of information exchanged
 in a friendship also varies. Is a social network of friends able to
 solve complex problems? The answer is seemingly yes, if the
 individuals are digital models of neurons. (To carry analogy further:
 different species of individuals would be analogous to different types
 of neurons e.g. purkinje cells vs pyramid cells vs granular vs. motor
 neurons. Individuals from one species may tend to be very gregarious,
 while those from other species might be generally xenophobic. etc.)

 I have no clue if anyone has ever explored genetic algorithms or
 related alife algos, factored together with the individuals being
 involved in a social network (with actual information exchange between
 friends). No clue as to how natural/artificial selection should work.
 Do anti-social individuals have a possibly redeeming role w.r.t. the
 organism as a whole? Do selection pressures on individuals (weak
 individuals are culled) destroy social networks? Do such networks
 automatically evolve altruism, because a working social network with
 weak, altruistically-supported individuals is better than a shredded,
 dysfunctional social network consisting of only strong individuals?
 Dunno. Seems like there could be many many interesting questions.

 I'd be curious about the answers to Antonio's questions ...

 --linas







Re: [agi] AGI & Alife

2010-07-27 Thread Jan Klauck
Linas Vepstas wrote

First my answers to Antonio:

1) What is the role of Digital Evolution (and ALife) in the AGI context?

The nearest I can come up with is Goertzel's virtual pre-school idea,
where the environment is given and the proto-AGI learns within it.
It's certainly possible to place such a proto-AGI into an evolving
environment. I'm not sure how helpful this is, since now we also need
to make sense of the evolving environment in order to assess what the
agent does.

But that's far from the synthetic life approach, where environment and
agents are usually not that much pre-defined. And from those synth.
approaches I know about, they're mostly concerned with replicating
natural evolution, adaption, self-organization a.s.o. Some look into
the emergence and evolution of cooperation, but that's often very low
level and more interested in general properties; far from AGI.

2) Is it possible that some aspects of AGI could self-emerge from the
 digital evolution of intelligent autonomous agents?

I guess it's possible. But I guess one won't come up with a mechanism
that works in an AGI system, but rather with interesting properties of an AGI
system. Most intelligent agents are faked, not really cognitive or
so. In a simulation you see how agents develop/select strategies and
what works in an (evolutionary) environment. Like (wild idea now) the
ability to assign parts of its cognitive capacity to memory or processing
depending on the environmental context (more memory in unchanging and
more processing in changing environments). Those properties could be
integrated later as a detail of a bigger framework.

3) Is there any research group trying to converge both approaches?

My best ad-hoc idea is to scan through the last year's alife conference
program, look for papers that are promising, contact the authors and
ask whether they are into AGI or know people who are.

http://www.ecal2009.org/documents/ECAL2009_program.pdf

One of the topics was artificial consciousness and I saw several
papers going into this direction, often indirectly. Like the Swarm
Cognition and Artificial Life paper on p.34 or the first poster on
p.47.

Now to Linas' part:

 Seems like there could be many many interesting questions.

Many of these are specialized issues that are researched in alife but
more in social simulation. The Journal of Artificial Societies and
Social Simulation

http://jasss.soc.surrey.ac.uk/JASSS.html

is a good starting point if anyone is interested.

cu Jan




Re: [agi] AGI & Alife

2010-07-27 Thread Ian Parker
I did take a look at the journal. There is one question I have with regard
to the assumptions. Mathematically, the number of prisoners in the
Prisoner's Dilemma cooperating or not reflects the prevalence of
cooperators or non-cooperators present. Evolution *should* tend to von
Neumann's zero-sum condition. This is an example of Calculus solving a
problem far more neatly and elegantly than GAs, which should only be used
where there is no good or obvious Calculus solution. This is my first
observation.
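
A minimal sketch of the "Calculus" route, using the textbook replicator equation
for a two-strategy Prisoner's Dilemma with an illustrative payoff matrix; this is
a standard dynamic, not anything taken from the journal:

# Replicator dynamics for the fraction x of cooperators, integrated by Euler steps.
# Illustrative payoffs: R=3 mutual cooperation, T=5 temptation, P=1 punishment, S=0 sucker.
R, T, P, S = 3.0, 5.0, 1.0, 0.0

def dx_dt(x):
    f_coop   = R * x + S * (1 - x)      # expected payoff of a cooperator
    f_defect = T * x + P * (1 - x)      # expected payoff of a defector
    return x * (1 - x) * (f_coop - f_defect)

x, dt = 0.9, 0.01
for _ in range(2000):
    x += dt * dx_dt(x)

print(round(x, 4))   # tends toward 0: defection takes over without extra mechanisms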

Second observation about societal punishment eliminating free loaders. The
fact of the matter is that *freeloading* is less of a problem in advanced
societies than misplaced unselfishness. The 9/11 hijackers performed the
most unselfish and unfreeloading acts. Hope I am not accused
of glorifying terrorism! How fundamental an issue is this? It is fundamental
in that simulations seem :-

1) To be better done by Calculus.
2) Not to be useful in providing simulations of things we are interested in.

Neither of these two is necessarily the case. We could in fact simulate
opinion formation by social interaction. There would be no clear-cut
Calculus outcome.

The third observation is that Google is itself a GA. It uses popular appeal
in its page-ranking system. This is relevant to Matt's ideas. You can, for
example, string programs or other entities together. Of course to do this
association one needs Natural Language. You will also need NL in setting up
and describing any process of opinion formation. This is the great unsolved
problem. In fact any system not based on NL, but based on an analogue
response, is Calculus-describable.
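
On the "popular appeal" point, a minimal sketch of PageRank-style power iteration
over a tiny hand-made link graph; the graph, damping factor and iteration count
are illustrative, and this is of course a drastic simplification of what Google
actually does:

# Rank pages by popularity of incoming links via power iteration.
links = {                      # page -> pages it links to
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}
damping = 0.85

for _ in range(50):
    new = {p: (1 - damping) / len(pages) for p in pages}
    for p, outs in links.items():
        for q in outs:
            new[q] += damping * rank[p] / len(outs)
    rank = new

print(sorted(rank.items(), key=lambda kv: -kv[1]))   # 'c' and 'a' come out on top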


  - Ian Parker

On 27 July 2010 14:00, Jan Klauck jkla...@uni-osnabrueck.de wrote:


  Seems like there could be many many interesting questions.

 Many of these are specialized issues that are researched in alife but
 more in social simulation. The Journal of Artificial Societies and
 Social Simulation

 http://jasss.soc.surrey.ac.uk/JASSS.html

 is a good starting point if anyone is interested.

 cu Jan








Re: [agi] AGI & Alife

2010-07-27 Thread Ian Parker
I think I should say that for a problem to be suitable for GAs the space in
which it is embedded has to be non-linear. Otherwise we have an easy
Calculus solution. In

http://www.springerlink.com/content/h46r77k291rn/?p=bfaf36a87f704d5cbcb66429f9c8a808&pi=0

a fair number of such systems are described.


  - Ian Parker





Re: [agi] AGI & Alife

2010-07-27 Thread Ben Goertzel
Evolving AGI via an Alife approach would be possible, but would likely
take many orders of magnitude more resources than engineering AGI...

I worked on Alife years ago and became frustrated that the artificial
biology and artificial chemistry one uses is never as fecund as the
real thing. We don't understand which aspects of bio and chem are
really important for the evolution of complex structures.  So,
approaching AGI via Alife just replaces one complex set of confusions
with another ;-) ...

I think that releasing some well-engineered AGI systems in an Alife
type environment, and letting them advance and evolve further, would
be an awesome experiment, though ;)

-- Ben G

On Mon, Jul 26, 2010 at 11:23 PM, Linas Vepstas linasveps...@gmail.com wrote:
 I saw the following post from Antonio Alberti, on the linked-in
 discussion group:

ALife and AGI

Dear group participants.

The relation among AGI and ALife greatly interests me. However, too few 
 recent works try to relate them. For example, many papers presented in AGI-09 
(http://agi-conf.org/2009/) are about program learning algorithms (combining 
evolutionary learning and analytical learning). In AGI 2010, virtual pets 
have been presented by Ben Goertzel and are also another topic of this forum. 
There are other approaches in AGI that uses some digital evolutionary 
approach for AGI. For me it is a clear clue that both are related in some 
instance.


By ALife I mean the life-as-it-could-be approach (not simulate, but to use 
digital environment to evolve digital organisms using digital evolution 
(faster than Natural one - see 
http://www.hplusmagazine.com/articles/science/stephen-hawking-%E2%80%9Chumans-have-entered-new-stage-evolution%E2%80%9D).

So, I would like to propose some discussion topics regarding ALIfe and AGI:

1) What is the role of Digital Evolution (and ALife) in the AGI context?

2) Is it possible that some aspects of AGI could self-emerge from the digital 
evolution of intelligent autonomous agents?

3) Is there any research group trying to converge both approaches?

Best Regards,

  and my reply was below:

 For your question 3), I have no idea. For question 1) I can't say I've
 ever heard of anyone talk about this. For question 2), I imagine the
 answer is yes, although the boundaries between what's Alife and
 what's program learning (for example) may be blurry.

 So, imagine, for example, a population of many different species of
 neurons (or should I call them automata? or maybe I should call them
 virtual ants?) Most of the individuals have only a few friends (a
 narrow social circle) -- the friendship relationship can be viewed
 as an axon-dendrite connection -- these friendships are semi-stable;
 they evolve over time, and the type & quality of information exchanged
 in a friendship also varies. Is a social network of friends able to
 solve complex problems? The answer is seemingly yes, if the
 individuals are digital models of neurons. (To carry analogy further:
 different species of individuals would be analogous to different types
 of neurons e.g. purkinje cells vs pyramid cells vs granular vs. motor
 neurons. Individuals from one species may tend to be very gregarious,
 while those from other species might be generally xenophobic. etc.)

 I have no clue if anyone has ever explored genetic algorithms or
 related alife algos, factored together with the individuals being
 involved in a social network (with actual information exchange between
 friends). No clue as to how natural/artificial selection should work.
 Do anti-social individuals have a possibly redeeming role w.r.t. the
 organism as a whole? Do selection pressures on individuals (weak
 individuals are culled) destroy social networks? Do such networks
 automatically evolve altruism, because a working social network with
 weak, altruistically-supported individuals is better than a shredded,
 dysfunctional social network consisting of only strong individuals?
 Dunno. Seems like there could be many many interesting questions.

 I'd be curious about the answers to Antonio's questions ...

 --linas






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
CTO, Genescient Corp
Vice Chairman, Humanity+
Advisor, Singularity University and Singularity Institute
External Research Professor, Xiamen University, China
b...@goertzel.org

I admit that two times two makes four is an excellent thing, but if
we are to give everything its due, two times two makes five is
sometimes a very charming thing too. -- Fyodor Dostoevsky



[agi] AGI & Alife

2010-07-26 Thread Linas Vepstas
I saw the following post from Antonio Alberti, on the linked-in
discussion group:

ALife and AGI

Dear group participants.

The relation among AGI and ALife greatly interests me. However, too few recent 
works try to relate them. For example, many papers presented in AGI-09 
(http://agi-conf.org/2009/) are about program learning algorithms (combining 
evolutionary learning and analytical learning). In AGI 2010, virtual pets have 
been presented by Ben Goertzel and are also another topic of this forum. There 
are other approaches in AGI that use some digital evolutionary approach for 
AGI. For me it is a clear clue that both are related in some instance.


By ALife I mean the life-as-it-could-be approach: not to simulate, but to use a 
digital environment to evolve digital organisms using digital evolution 
(faster than the natural one - see 
http://www.hplusmagazine.com/articles/science/stephen-hawking-%E2%80%9Chumans-have-entered-new-stage-evolution%E2%80%9D).

So, I would like to propose some discussion topics regarding ALife and AGI:

1) What is the role of Digital Evolution (and ALife) in the AGI context?

2) Is it possible that some aspects of AGI could self-emerge from the digital 
evolution of intelligent autonomous agents?

3) Is there any research group trying to converge both approaches?

Best Regards,

 and my reply was below:

For your question 3), I have no idea. For question 1) I can't say I've
ever heard of anyone talk about this. For question 2), I imagine the
answer is yes, although the boundaries between what's Alife and
what's program learning (for example) may be blurry.

So, imagine, for example, a population of many different species of
neurons (or should I call them automata? or maybe I should call them
virtual ants?) Most of the individuals have only a few friends (a
narrow social circle) -- the friendship relationship can be viewed
as an axon-dendrite connection -- these friendships are semi-stable;
they evolve over time, and the type & quality of information exchanged
in a friendship also varies. Is a social network of friends able to
solve complex problems? The answer is seemingly yes, if the
individuals are digital models of neurons. (To carry analogy further:
different species of individuals would be analogous to different types
of neurons e.g. purkinje cells vs pyramid cells vs granular vs. motor
neurons. Individuals from one species may tend to be very gregarious,
while those from other species might be generally xenophobic. etc.)

I have no clue if anyone has ever explored genetic algorithms or
related alife algos, factored together with the individuals being
involved in a social network (with actual information exchange between
friends). No clue as to how natural/artificial selection should work.
Do anti-social individuals have a possibly redeeming role w.r.t. the
organism as a whole? Do selection pressures on individuals (weak
individuals are culled) destroy social networks? Do such networks
automatically evolve altruism, because a working social network with
weak, altruistically-supported individuals is better than a shredded,
dysfunctional social network consisting of only strong individuals?
Dunno. Seems like there could be many many interesting questions.
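
A minimal sketch of the kind of toy model these questions suggest, assuming agents
with a small "friend" list who exchange a scalar signal and are selected on how
well their output tracks a target; every structural choice here (the fitness, the
culling rule, the signal) is an illustrative assumption:

import random

class Agent:
    def __init__(self, idx):
        self.idx = idx
        self.weight = random.uniform(-1, 1)   # how it transforms incoming signal
        self.friends = []                     # indices of its small social circle
        self.output = 0.0

def step(agents, stimulus):
    # Each agent averages the external stimulus and its friends' current outputs.
    for a in agents:
        incoming = [stimulus] + [agents[f].output for f in a.friends]
        a.output = a.weight * sum(incoming) / len(incoming)

def fitness(a, target=0.5):
    return -abs(a.output - target)            # selected for tracking a target signal

agents = [Agent(i) for i in range(30)]
for a in agents:
    a.friends = random.sample([i for i in range(30) if i != a.idx], 3)

for generation in range(100):
    for _ in range(5):
        step(agents, stimulus=1.0)
    ranked = sorted(agents, key=fitness, reverse=True)
    for weak, strong in zip(ranked[-5:], ranked[:5]):   # cull weakest, copy strongest
        weak.weight = strong.weight + random.gauss(0, 0.05)

print(round(sum(fitness(a) for a in agents) / len(agents), 3))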

I'd be curious about the answers to Antonio's questions ...

--linas

