Re: [agi] who is going to build the wittgenstein-ian AI filter to spot all the intellectual nonsense

2008-11-26 Thread Tudor Boloni
John, an impressive effort and a wonderful direction: an analytical psychology
of the Good in humans is sorely lacking, and the focus on human sickness has
had a monopoly for far too long, with untold negative consequences for
society at large.  Assuming these meanings are correct (or will be fine-tuned
to be so at some point), could your coding not include classes that would
prohibit improper uses of such terms, a kind of system for rejecting attempts
to mix value judgments and labels?
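
For concreteness, here is one minimal way such rejecting classes might look.
This is a sketch only; the term lists and class names are my own hypothetical
illustrations, not anything taken from John's schematics or patent:

# Hypothetical sketch: refuse statements that mix value judgments with labels.
# The vocabularies and class names below are illustrative assumptions.

VALUE_TERMS = {"honor", "justice", "truth", "guilt", "blame"}   # evaluative vocabulary
LABEL_TERMS = {"time", "space", "consciousness"}                # descriptive labels

class CategoryError(Exception):
    """Raised when a statement mixes evaluative and descriptive vocabulary."""

class Statement:
    def __init__(self, subject: str, predicate: str):
        # A value term predicated of a bare descriptive label is exactly the
        # construction such a filter would refuse to build at all.
        if subject in LABEL_TERMS and predicate in VALUE_TERMS:
            raise CategoryError(
                f"'{predicate}' is a value judgment; '{subject}' is a label. "
                "Define the term before mixing the two.")
        self.subject, self.predicate = subject, predicate

try:
    Statement("consciousness", "truth")
except CategoryError as err:
    print(err)   # the improper mixture is rejected at construction time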



On Wed, Nov 26, 2008 at 12:29 AM, John LaMuth [EMAIL PROTECTED] wrote:

   Mike

 The abstract nouns Honor, Justice, and Truth can all be shown
 to be objectively based in the science of Behaviorism

 http://www.angelfire.com/rnb/fairhaven/behaviorism.html

 as outlined in technically linked schematics

 http://www.angelfire.com/rnb/fairhaven/schematics.html

 and even granted US patent 6587846

 www.ethicalvalues.com

 Just offering up the latest advances...

 Wittgenstein would be proud  ^_^

 Cordially

 John LaMuth

 www.charactervalues.org


 *
 GUILT
 *

 Previously, you (as reinforcer) have leniently acted in a reinforcing
 fashion towards me: overriding my (as procurer) submissive treatment of you.

 But now, I (as personal authority) will *guiltily* act in a submissive
 fashion towards you: overruling your lenient treatment of me.

 *
 BLAME
 *

 Previously, I (as personal authority) have guiltily acted in a submissive
 fashion towards you: overriding your (as reinforcer) lenient treatment of
 me.

 But now, you (as my personal follower) will *blamefully* act in a lenient
 fashion towards me: overruling my (as PA) guilty treatment of you.

 *
 HONOR
 *

 Previously, you (as my personal follower) have blamefully acted in a
 lenient fashion towards me: overriding my (as PA) guilty treatment of you.

 But now, I (as group authority) will *honorably* act in a guilty fashion
 towards you: overruling your (as PF) blameful treatment of me.

 *
 JUSTICE
 *

 Previously, I (as group authority) have honorably acted in a guilty fashion
 towards you: overriding your (as PF) blameful treatment of me.

 But now, you (as group representative) will *justly*-blame me: overruling
 my (as GA) honorable sense of guilt.

 *
 LIBERTY
 *

 Previously, you (as group representative) have justly-blamed me: overriding
 my (as GA) honorable sense of guilt.

 But now, I (as spiritual authority) will honorably act in a *libertarian*
 fashion towards you: overruling your just-blaming of me.

 *
 HOPE
 *

 Previously, I (as spiritual authority) have honorably acted in a
 libertarian fashion towards you: overriding your (as GR) just-blaming of me.

 But now, you (as my spiritual disciple) will blamefully-*hope* for
 justice: overruling my (as SA) libertarian sense of honor.

 *
 FREE WILL
 *

 Previously, you (as my spiritual disciple) have blamefully-hoped for
 justice: overriding my (as SA) libertarian sense of honor.

 But now, I (as humanitarian authority) will honorably act in a *freely
 willed* fashion towards you: overruling your (as SD) blameful-hope for
 justice.

 *
 TRUTH
 *

 Previously, I (as humanitarian authority) have honorably acted in a
 freely-willed fashion towards you: overriding your (as SD) blameful hope for
 justice.

 But now, you (as representative member of humanity) will justly-hope for
 the *truth*: overruling my (as HA) libertarian sense of free will.

 *
 EQUALITY
 *

 Previously, you (as representative member of humanity) have justly-hoped
 for the truth: overriding my (as HA) libertarian sense of free will.

 But now, I (as transcendental authority) will freely-willed act in an
 *egalitarian* fashion towards you: overruling your (as RH) just-hope for
 the truth.

 *
 BLISS
 *

 Previously, I (as transcendental authority) have freely-willed acted in an
 egalitarian fashion towards you: overruling your (as RH) just-hope for the
 truth.

 But now, you (as my transcendental follower) will *blissfully* hope for
 the truth: overruling my (as TA) egalitarian treatment of you.
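 As an aside for the coders on the list: the schema above is regular enough
 to be captured as data. A rough sketch follows; the tuple layout and the
 short "manner" glosses are my own illustrative compression of John's
 wording, not his encoding:

 # Each stage of the schema: (term, actor, role, manner, what it overrules).
 # The layout is an illustrative encoding, not the patented representation.
 SCHEMA = [
     ("GUILT",     "I",   "personal authority",         "submissive",         "your lenient treatment of me"),
     ("BLAME",     "you", "personal follower",          "lenient",            "my guilty treatment of you"),
     ("HONOR",     "I",   "group authority",            "guilty",             "your blameful treatment of me"),
     ("JUSTICE",   "you", "group representative",       "blameful",           "my honorable sense of guilt"),
     ("LIBERTY",   "I",   "spiritual authority",        "libertarian",        "your just-blaming of me"),
     ("HOPE",      "you", "spiritual disciple",         "blamefully hopeful", "my libertarian sense of honor"),
     ("FREE WILL", "I",   "humanitarian authority",     "freely willed",      "your blameful hope for justice"),
     ("TRUTH",     "you", "representative of humanity", "justly hopeful",     "my libertarian sense of free will"),
     ("EQUALITY",  "I",   "transcendental authority",   "egalitarian",        "your just-hope for the truth"),
     ("BLISS",     "you", "transcendental follower",    "blissfully hopeful", "my egalitarian treatment of you"),
 ]

 for term, actor, role, manner, overrules in SCHEMA:
     print(f"{term}: {actor} (as {role}) act(s) in a {manner} fashion, overruling {overrules}.")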

 - Original Message -
 *From:* Mike Tintner [EMAIL PROTECTED]
 *To:* agi@v2.listbox.com
 *Sent:* Tuesday, November 25, 2008 10:39 AM
 *Subject:* Re: [agi] who is going to build the wittgenstein-ian AI filter
 to spot all the intellectual nonsense



 Tudor: I agree that there are many better questions to elucidate the
 tricks/pitfalls of language, but let's list the biggest time wasters first,

 Er, it's a rather big job. I think you're talking about all abstract nouns.
 Time. Space. Honour. Justice. Truth. Realism. Beauty. Science. Art.  You're
 talking IOW about a dimension of language almost as fundamental as adverbs.

 It's worth pursuing the illusions created by the verbal abstractions of
 language and the ways we use them  -  but it's a huge task.


[agi] who is going to build the wittgenstein-ian AI filter to spot all the intellectual nonsense

2008-11-25 Thread Tudor Boloni
we invariably generate and then fruitlessly explore (our field is even more
exposed to this than most others) until we come up against the limits of our
own language and, defeated and fatigued, realize we never thought the
questions through. I nominate this guy:
http://hyperlogic.blogspot.com/

At a minimum, Wittgenstein's Brown Book should be required reading for all
AGI list members

t





Re: [agi] who is going to build the wittgenstein-ian AI filter to spot all the intellectual nonsense

2008-11-25 Thread Tudor Boloni
Wrong category is trivial indeed, but quickly removing computing resources
from impossible processes can be a great benefit to any system, and an
incredible benefit if the system learns to spot deeply nonsensical problems
before dedicating almost any resources to them... What if we could design a
system that, by its very structure, couldn't even generate these
Wittgensteinian deep errors? Also, as for it being a cop-out, I disagree:
it clears the mind to the deepest levels, allowing a wellspring of clarity
that shows other answers in record time and with accuracy. An example: Minsky
points to the same stupidity of asking the question of what is
consciousness, preferring to just look for the stimulus/behavior rules that
are required to survive and act, and letting others worry about how many of
those rules make up their version of the word 'conscious'...

On Tue, Nov 25, 2008 at 3:46 PM, Richard Loosemore [EMAIL PROTECTED] wrote:

 Tudor Boloni wrote:

 we invariably generate and then fruitlessly explore (our field is even
 more exposed to this than most others) until we come up against the limits
 of our own language and, defeated and fatigued, realize we never thought the
 questions through. I nominate this guy:

 http://hyperlogic.blogspot.com/

 At a minimum, Wittgenstein's Brown Book should be required reading for all
 AGI list members


 Read it.  Along with pretty much everything else he wrote (that is in
 print, anyhow).

 Calling things a category error is a bit of a cop out.




 Richard Loosemore








Re: [agi] who is going to build the wittgenstein-ian AI filter to spot all the intellectual nonsense

2008-11-25 Thread Tudor Boloni
I agree that there are many better questions to elucidate the
tricks/pitfalls of language, but let's list the biggest time wasters first;
and the post showed some real time wasters, from various fields, that I
found valuable to be aware of.

 It implies it is pointless to ask what the essence of time is, but then
 proceeds to give an explanation of time that is not pointless, and may shed
 light on its meaning, which is perhaps as much of an essence as time has..

I think the post tries to show that the error is treating time like an
object of reality with an essence; that is nonsensical and a waste of time ;)
It seems wonderful to have an AGI system answer such a question with: time
is a human label of arbitrary length based on conventions among human
subgroups.


What more needs to be said of time than that it is a label? Allowing the
word 'essence' creates a very hard, confusing, and pointless internal
'debate' in an AGI; 'essence' implies a further compression of data, or a
synopsis of a concept, or a deeper fundamental level of truth, not just its
meaning... so I would be happier hearing: time has no essence; time is
defined as:



 Similarly, it implies it is pointless to ask what is the nature of
 consciousness, and then gives an explanation, that while not necessarily
 correct, or even close to complete, has some meaning about the nature of
 what we call consciousness.



Same as above... having researchers look around for something that doesn't
exist is a time waster. Having word handles to easily move abstract
concepts about is a productivity enhancer IF AND ONLY IF communicants share
word definitions. Since the word consciousness needs to be defined as 'how
many simple behaviors are we going to require before we agree to call
something conscious', this defining stage is critical before any use of the
word. So if an AGI is asked the question 'what is consciousness', it would
have to respond that it is defined differently by all askers, so it has no
nature; it's just a variable that needs to be defined before its use in a
conversation.


I guess the key here is that there is an important division between
legitimate language and nonsense, and I never see us try to protect our
systems from being burdened by the nonsense.
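
A minimal sketch of what such protection might look like: treat abstract
nouns as unbound variables that must be given an operational definition
before the system will reason with them. (All names and the word list below
are my own illustrative assumptions, not a proposal from this thread.)

# Toy sketch: abstract nouns are treated as unbound variables.
# Questions about them are refused until the asker binds a definition.

definitions: dict[str, str] = {}   # asker-supplied operational definitions
ABSTRACT_NOUNS = {"consciousness", "time", "justice", "beauty"}

def ask(topic: str) -> str:
    if topic in ABSTRACT_NOUNS and topic not in definitions:
        return (f"'{topic}' is defined differently by every asker; "
                "please supply an operational definition before use.")
    return f"Reasoning about {topic} using: {definitions.get(topic, 'built-in model')}"

print(ask("consciousness"))   # refused: no definition bound yet
definitions["consciousness"] = "exhibits at least N agreed stimulus/behavior rules"
print(ask("consciousness"))   # now answerable, relative to that definition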





Re: [agi] who is going to build the wittgenstein-ian AI filter to spot all the intellectual nonsense

2008-11-25 Thread Tudor Boloni
Richard, please give me a link to the paper, or at least the example related
to manipulation of subjective experience in others; I am indeed curious to
see how their approach would fare... Thanks in advance for the effort.

tudor



  For example, they could not, in principle, answer any questions about the
 practical effects of the various manipulations that I proposed in my recent
 paper.  And yet, it turns out that I can make predictions about how the
 subjective experience of people would be affected by these manipulations:
  pretty good work for something that is labelled by W & M as a non-concept!


 Richard





  On Tue, Nov 25, 2008 at 3:46 PM, Richard Loosemore [EMAIL PROTECTED] wrote:

Tudor Boloni wrote:

we invariably generate and then fruitlessly explore (our field
is even more exposed to this than most others) until we come up
against the limits of our own language and, defeated and
fatigued, realize we never thought the questions through. I
nominate this guy:

http://hyperlogic.blogspot.com/

At a minimum, Wittgenstein's Brown Book should be required
reading for all AGI list members


Read it.  Along with pretty much everything else he wrote (that is
in print, anyhow).

Calling things a category error is a bit of a cop out.




Richard Loosemore




 










Re: [agi] who is going to build the wittgenstein-ian AI filter to spot all the intellectual nonsense

2008-11-25 Thread Tudor Boloni
Your list is a start on a list of only potentially problematic questions or
constructs, since using these words and concepts is actually going to be
required in any AGI system... a flag list is a start, but a set of rules
that eliminate areas of language construction we never need to worry about
in designing AGI would be the artful goal.
On Tue, Nov 25, 2008 at 7:39 PM, Mike Tintner [EMAIL PROTECTED] wrote:



 Tudor: I agree that there are many better questions to elucidate the
 tricks/pitfalls of language, but let's list the biggest time wasters first,

 Er, it's a rather big job. I think you're talking about all abstract nouns.
 Time. Space. Honour. Justice. Truth. Realism. Beauty. Science. Art.  You're
 talking IOW about a dimension of language almost as fundamental as adverbs.

 It's worth pursuing the illusions created by the verbal abstractions of
 language and the ways we use them  -  but it's a huge task.







Re: [agi] Compression PLUS a fitness function motoring for hypothesized compressibility is intelligence?

2008-05-31 Thread Tudor Boloni
Jim, these are good points, and seem to be saying that even with the
perfect metric for intelligence discovered (let's pretend), and a maximally
intelligent program built (keep pretending), without a value system in
place that selects among future possible actions or internal
tests/experiments to perform, and whose outcomes are JUDGED as favorable or
less so, we don't have an AGI of human proportions.  Or are you saying that
permutations and compression alone would result in a huge database, optimally
organized but not even intelligent? What if any question asked of this
program returned all possible answers (including the Japanese MU: rephrase
the question, since it assumes untrue concepts), and the user, based on his
own value system, ACTED according to his answers of choice? This to me seems
even more useful than some bigoted program that really acts like one of
us... maybe we should define what AGI goals we are actually working for

tudor

On Sat, May 31, 2008 at 2:36 PM, Jim Bromer [EMAIL PROTECTED] wrote:

 The attempt to create an objective measure or process for intelligence
 seems worthwhile, but the problem here is that in making the attempt to
 eliminate actions and beliefs from the modeling of intelligence one is in
 danger of repeating the serious error of over-simplification as was done,
 for example, when the behaviorists tried to eliminate ideas and reasoning
 from the study of psychology, or when the proponents of the theories of
 logic-based artificial intelligence tried to eliminate other methods of
 reasoning from the scientific retinue on the basis that logic was the only
 truly scientific form of reasoning available.

 The use of a metaphor from the history of science is legitimate.  However,
 when the metaphor purports to make an overly broad conclusion, especially one
 that is narrowly focused on a system (mathematical celestial orbital
 physics) which has yet to show its efficacy in the field of general
 artificial intelligence, and in which the exclusion of other methods of
 reasoning is presented as if it had emerged from some kind of triumph, you
 really have to think before you jump.

 I often argue against things like the simplistic use of Bayesian
 reasoning.  However, when I do make an argument like that, I am not arguing
 against the value of Bayesian reasoning, but against the narrow simplistic
 belief that Bayesian reasoning is itself sufficient to explain human level
 general intelligence.

 Similarly, I am not against the attempts to create objective measures and
 processes for intelligence, but I am definitely opposed to those arguments
 which make an unsubstantiated claim that a narrow, simplistic, objective
 method is going to be sufficient when the evidence supporting that
 conclusion is seriously lacking and there are numerous good reasons for
 including other means of reasoning in the design of an AI program.

 Jim Bromer

 - Original Message 
 From: J Storrs Hall, PhD [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Sent: Friday, May 30, 2008 11:12:54 AM
 Subject: Re: [agi] Compression PLUS a fitness function motoring for
 hypothesized compressibility is intelligence?

 I don't really have any argument with this, except possibly quibbles about
 the
 nuances of the difference between empirical and empiricism -- and I
 don't
 really care about those!

 On Friday 30 May 2008 05:04:58 am, Tudor Boloni wrote:
  The key point was lost, here is a clearer way of saying it.
 
  Kepler's experience (his empirical work and experimentation with all his
  equipment) IS NOT what helped him DISCOVER properties of gravity (equal
  times for equal areas) (we can agree no one Invented it, though Newton
  generalized Kepler's insights). He had an INSIGHT separate from his
 possible
  SENSORY past or SENSORY future.  In the words of Einstein, in a speech on
  Kepler given on the 300th anniversary of his death:
 
  One can never see where a planet really is at any given moment, but only
 in
  what direction it can be seen just then from the Earth, which is itself
  moving in an unknown manner around the Sun. The difficulties thus seemed
  practically unsurmountable [by empirical means].
  Kepler had to discover a way of bringing order into this chaos.  The
  breakthrough was Kepler's Universal Mathematical Physics as he defined
 it,
  and NOT physical empirical cosmology (which he specifically REJECTS in
 his
  attack on Aristotle's SENSORY based beliefs).
 
  So what created this peak of human INSIGHT if compression of experienced
  patterns was not enough?  He did trade one theory for another but we
 call
  that thinking, and he didn't use empiricism to do it, he hypothesized new
  patterns and compressed them until they could not be disproved
  empirically... (this is a major difference from how modern science is
  executed, where most researchers actually give way, way too much worth to
  new theories arising from their experimental results, instead of simply
  removing theories

Re: [agi] Simplistic Paradigms Are Not Substitutes For Insight Into Conceptual Complexity

2008-05-31 Thread Tudor Boloni
Jim, we will eventually stumble upon this conceptual complexity, namely a
few algorithms that exceed the results that human intelligence achieves
(with algorithms created through slow evolution and relatively fast
learning). We would have a smarter machine that exhibits advanced
intelligence in many ways... maybe capable of self-learning to ever higher
levels, and then nothing else is needed, except that:

Today, we don't yet know how to extract sufficient patterns from natural
language without additional training/trainers, because languages reflect the
unique histories of the respective races.  Your conceptual-complexity-laden
program full of insights would need to be trained in these cases anyway, no
matter how insightful it became (think of Wolfram's Computational
Equivalence, where some things really cannot be pattern-matched but must be
simulated to the last detail to be fully understood, due to their complex
nature).  So why start out with something that goes back to training issues
anyway and is not even available today?

Alternatively, semantic webs from expert systems will become more available
every year. The permutations of the objects contained therein will not be
exhaustive searches of the truly unrealistic search space that would result,
but more like Deep Blue's solutions, using trade-offs of time and quality
of knowledge. Many permutations would never even be attempted, because
objects are in different classes, and context rules determine areas with a
high potential for valuable insights, which would be favored.  The constant
self-organization of the program and its database according to the rules of
maximal lossless compression would ensure that a given set of computational
resources becomes intelligent over time.  Letting such a system read CYC-
type databases will further reduce the search space of interest.
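
A toy sketch of that kind of class/context pruning; the objects, classes,
and the compatibility rule are invented purely for illustration:

# Illustrative sketch: prune the permutation space of a semantic web by
# class compatibility, rather than exhaustively pairing everything.
from itertools import combinations

# Hypothetical knowledge objects, each tagged with a class.
objects = [("aspirin", "drug"), ("headache", "symptom"),
           ("granite", "mineral"), ("fever", "symptom")]

# Context rules: only these class pairings are worth exploring.
COMPATIBLE = {("drug", "symptom")}

def candidate_pairs(objs):
    for (a, ca), (b, cb) in combinations(objs, 2):
        if (ca, cb) in COMPATIBLE or (cb, ca) in COMPATIBLE:
            yield a, b   # a pairing with high potential for a useful insight

print(list(candidate_pairs(objects)))   # skips drug-mineral, symptom-symptom, etc.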

The benefit is that this can be done sooner, with the knowledge we have today.

t

On Sat, May 31, 2008 at 4:38 PM, Jim Bromer [EMAIL PROTECTED] wrote:

 Suppose that an advocate of behaviorism and reinforcement was able to make
 a successful general AI program that was clearly far in advance of any other
 effort.  At first I might argue that his advocacy of behaviorism and
 reinforcement was only an eccentricity, that his program must be coded with
 some greater complexity than simple reinforcement to produce true learning.
 Now imagine that everyone got to examine his code, and after studying it I
 discovered that it was amazingly simple in concept.  For example, suppose
 the programmer only used 32 methods to combine or integrate referenced data
 objects and these 32 methods were used randomly in the program to combine
 behaviors that were to be reinforced by training.  At first, I might argue
 that the 32 simple internal methods of combining data or references wasn't
 truly behaviorist because behaviorism was only concerned with the observable
 gross behavior of an animal.  My criticism would be somewhat valid, but it
 would quickly be seen as petty quibbling and non-instructive because, in
 this imagined scenario, the efficacy of the program is so powerful, and the
 use of 32 simple integration methods along with a reinforcement of
 observable 'behaviors' so simple, that my criticism against the programmer's
 explanation of the paradigm would be superficial.  I might claim that it
 would be more objective to drop the term behaviorist in favor of the use of
 some more objective explanation using familiar computational terms, but even
 this would be a minor sub-issue compared to the implications of the success
 of the paradigm.
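
 For concreteness, a toy sketch of the fictional scheme; the 32 stand-in
 combinators and the reward function are invented for illustration, since
 Jim's story deliberately leaves them unspecified:

 # Sketch of the fictional scheme: 32 simple combination methods applied at
 # random, with externally reinforced 'behaviors' kept and the rest discarded.
 import random

 METHODS = [lambda a, b, k=k: (a + b * k) % 257 for k in range(32)]  # 32 stand-in combinators

 def train(pairs, reward, steps=1000):
     kept = []
     for _ in range(steps):
         a, b = random.choice(pairs)
         m = random.choice(METHODS)        # combine data objects at random
         behavior = m(a, b)
         if reward(behavior):              # trainer reinforces observable behavior
             kept.append((m, a, b))
     return kept

 # e.g. a trainer that reinforces only even-valued outputs
 print(len(train([(3, 5), (7, 11)], reward=lambda x: x % 2 == 0)))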



 The programmer in my fictional story could claim that the simplest
 explanation for the paradigm could qualify as the most objective
 description.  While he did write 32 simple internal operations, the
 program had to be trained through the reinforcement of its observable
 'behavior', so it would qualify as a true behavioral-reinforcement method.
 People could make the case that they could improve on the program by
 including more sophisticated methods in the program, but the simplest
 paradigm that could produce the desired effects would still suffice as an
 apt description of the underlying method.



 Now there are a number of reasons why I do not think that a simple
 reinforcement scheme, like the one I mentioned in my story, will be first to
 produce higher intelligence or even feasible as a model for general use.  The
 most obvious one, is that the number of combinations of data objects that
 are possible when strung together would be so great, that it would be very
 unlikely that the program would stumble on insight through a simplistic
 reinforcement method as described.  And it would be equally unlikely that
 the trainer would have the grasp of the complexity of possible combinations
 to effectively guide the program toward that unlikely goal.  To put it
 another way, the simplistic reinforcement paradigm is really only a
 substitute for insightful 

Re: [agi] U.S. Plan for 'Thinking Machines' Repository

2008-05-29 Thread Tudor Boloni
i have higher hopes for the project than richard, failing to see the
circular causality alluded to... first, human intellect is quickly
overwhelmed when trying to build logic structures with complex relationships
or even many simple relationships strung together (we max at four or five
recursions in our working memory e.g. unlike richard, we dont think that he
didnt tell them what they couldnt do without us)... so we outsource the task
to improved methods of abstraction/simplification, programming and more
common analogies... second,  its ALWAYS about the relationships between
objects/concepts, and finding all possibly useful arrangements does appear
to be a readily finite problem, one amenable to exhaustive search algorithms
it appears...

As a side note: does anyone else feel that intelligence and compression (or,
less formally, the ability to summarize) are identical?
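
One concrete way to cash that intuition out is the normalized compression
distance of Cilibrasi and Vitanyi, which substitutes a real compressor for
the uncomputable Kolmogorov complexity. A quick sketch (the sample strings
are mine, purely for illustration):

# Normalized compression distance: a practical proxy for shared information.
import zlib

def C(x: bytes) -> int:
    return len(zlib.compress(x, 9))   # compressed size approximates complexity

def ncd(x: bytes, y: bytes) -> float:
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)   # ~0 = near-identical, ~1 = unrelated

a  = b"the quick brown fox jumps over the lazy dog" * 10
b_ = b"the quick brown fox leaps over the lazy cat" * 10
c  = b"completely different material about orbital mechanics" * 10
print(ncd(a, b_), ncd(a, c))   # the related pair scores lower than the unrelated pair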

And the last bit: isn't this guy already doing the things NIST proposes? It's
just corporate IP and not too available (not affiliated in any way):
http://www.knowledgefoundations.com/Papers.html

cheers,
tudor

On Thu, May 29, 2008 at 5:02 PM, Richard Loosemore [EMAIL PROTECTED]
wrote:

 Brad Paulsen wrote:

 Fellow AGI-ers,

 At the risk of being labeled the list's newsboy...

 U.S. Plan for 'Thinking Machines' Repository
 Posted by samzenpus on Wednesday May 28, @07:19PM
 from the save-those-ideas-for-later dept.




 An anonymous reader writes: "Information scientists organized by the U.S.'s
 NIST say they will create a concept bank that programmers can use to build
 thinking machines that reason about complex problems at the frontiers of
 knowledge - from advanced manufacturing to biomedicine. The agreement by
 ontologists - experts in word meanings and in using appropriate words to
 build actionable machine commands - outlines the critical functions of the
 Open Ontology Repository (OOR)." More on the summit that produced the
 agreement here.




 Interesting, but I am afraid that whenever I see someone report a project to
 collect all the world's knowledge in a nice, centralized format (Cyc, and
 Daughters-of-Cyc) I cannot help but think of one of the early chapters in
 Neal Stephenson's Quicksilver, where Wilkins, Leibnitz and others are trying
 to form a universal grammar in which all the world's facts can be organized
 in such a way that (essentially) a thinking machine can be built.

 Stephenson illustrates the foolishness of this quest with humor, but it is
 a deeply thought-provoking humor.

 The main thoughts that it provokes are these.  If we can build something
 that can use those facts, it must be smart enough to be able to collect such
 information by itself.  Not only that, but the way that this hypothetical
 machine would collect and use the facts might well be such that the format
 in which the knowledge is represented will be critically dependent on the
 way that the using and collecting processes operate, and not necessarily
 like the format that we choose ahead of time.

 Today, we most emphatically do not have a system that knows how to (fully)
 use and collect such facts.

 Therefore... there is a great danger that any such collection will be
 useless until the 'thinking machinery' itself is built, and then, when the
 machinery does get built, the collection of facts will be superfluous.

 Rather like my 8-year-old son (bless his heart) who, confronted with an
 essay project that he could not face, started off by spending four solid
 hours getting the fonts, colors and backgrounds just right.

 Just my gut feeling, that's all:  carry on with what you are doing.



 Richard Loosemore






