Hi Ben,
Got five minutes, so I thought I'd work through your points.

On Fri, May 22, 2015 at 3:33 AM, Benjamin Kapp <[email protected]> wrote:

> I agree that we should do our best to avoid obscure terminology, because
> it will make it easier for us to understand each other.
>
> Regarding surveying the field of AGI.  It seems that everyone who does
> such a survey talks about the cyc project (hard code all knowledge of the
> world into your system).  It seems like this information could be useful
> for AGI, but of course the methodology seems fundamentally flawed since
> that isn't how humans acquire their knowledge.
>

Here you have touched on something important and under-appreciated. The AGI
program is less about the AGI knowing anything and more about 'finding out'
from a point of ignorance. I see CYC, and even Wikipedia, as abstract
compendia of knowledge that are utterly meaningless to any AGI that is not
in the world the way we are, and that a real AGI must be able to *add to*.
Autonomously, not via a human telling it what new knowledge looks like. The
H-AGI substrate (wet or dry) is there to give the entity access to the
world in a way that is more like ours. What way is that? Well, that is for
the IGI to investigate. That is what is unexplored in the AI and AGI
programs to date.


> But humans are born with some knowledge of the world, for example babies
> are born with the ability to swim, and to avoid crawling off of cliffs and
> such or to put it more simply humans are born with "instincts" which can be
> thought of as a kind of knowledge.  Perhaps we could give our AGI the cyc
> knowledge as "instinctual" knowledge?  But we wouldn't expect our AGI to
> exclusively acquire knowledge in this fashion since, hardcoding all of
> human knowledge isn't necessary, and difficult (if not impossible).
> Although it is my understanding that the cyc project has begun to utilize
> automated means of adding knowledge to their system, and as such it isn't
> fair to say they only go about hardcoding knowledge of the world.
>

That CYC automation is, of course, something created by a human, not
something innate in the C-AGI that CYC is. It may be useful! Worth doing.
But it is not what an H-AGI is doing. An H-AGI may actually have such C-AGI
knowledge hardwired, in a way that might be called instinct and/or
reflexes. I can see that in an H-AGI.


>
> Of course automated knowledge acquisition is precisely the method used by
> IBM's Watson.  For example it read all of wikipedia and generated knowledge
> of the world from this.  Interestingly they also read the urban dictionary,
> but found that it degraded the performance of their system, and so they had
> to "unlearn" this.  Garbage in garbage out it would seem.  An interesting
> aspect of IBM's Watson is that they do indeed use a kind of hybrid system.
> They have many algorithms which they use when they try to answer Jeopardy
> questions, and they have another system that measures the confidence of the
> results of each of these algorithms and the one with the highest confidence
> is selected as the final answer.  Of course they are just doing
> computational linguistics, and so their system has no hope of working on
> non text based modalities, which is a serious limitation, and certainly an
> indication that it isn't really AGI.
>
> So it would seem that our system would have as a necessary requirement
> that it can work with all kinds of input, be it sight, sound, touch, etc.
> Although if it had the ability to query Watson as a module in some part of
> the system, it isn't immediately obvious to me that this would be a bad
> thing.  It would just be the case that the Watson approach in and of
> itself is not viable for creating real AGI.  If I had the ability to query
> Google in my brain, surely Google wouldn't become a kind of AGI even though
> it would be part of an AGI (namely me).  But the essential part of the
> system that was creating the AGI would be my brain, not Google.  And so
> perhaps it doesn't make much sense to just wire together a bunch of non-AGI
> systems in the hopes that together they will create AGI?
>

I can see now you have 'got it'.

These knowledge abstractions (Watson, Google, Wikipedia, CYC, etc.) can be
integrated as a sensory/perceptual system feeding the core biophysical
substrate. That will add to knowledge acquired through the traditional
sensory modes (vision, audition, etc.). That makes it H-AGI. In and of
themselves, Watson, Google, etc. are not H-AGI, not because they are
comparatively ignorant, but because humans are built into the process of
finding out new knowledge, or of telling them what new knowledge looks
like. Humans are a kind of 'intrinsic puppeteer'. In contrast, an H-AGI
will acquire new knowledge on its own, like humans do. Our human substrate
is intrinsically able to *learn how to learn* new things. That is something
a human baby has that no AI has. In a very real way the H-AGI program is
about building such a baby. Not a human-level baby; maybe a worm, ant or
bee first.
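
The confidence-selection scheme you describe for Watson can be sketched in
a few lines. This is a toy illustration only, not IBM's actual pipeline;
the scorer names, the question, and the confidence numbers are all
invented:

```python
# Toy sketch of Watson-style answer selection (hypothetical scorers):
# each algorithm proposes an (answer, confidence) pair, and the answer
# carrying the highest confidence is selected as the final response.

def keyword_match(question):
    return ("Toronto", 0.42)

def statistical_model(question):
    return ("Chicago", 0.87)

def knowledge_base_lookup(question):
    return ("Chicago", 0.65)

def best_answer(question, answerers):
    candidates = [answerer(question) for answerer in answerers]
    # pick the candidate whose confidence score is maximal
    return max(candidates, key=lambda pair: pair[1])

answer, confidence = best_answer(
    "Its largest airport is named for a WWII hero?",
    [keyword_match, statistical_model, knowledge_base_lookup],
)
print(answer, confidence)  # → Chicago 0.87
```

Note that nothing in the selection step is AGI; the humans who chose the
scorers and tuned the confidences are doing the real epistemic work.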

The H-AGI idea is about knowledge dynamics (change), not any particular
knowledge. Mathematically H-AGI is about

dKnowledge(t)/dt, not Knowledge(t) itself.

That's where the differences (I predict) will be scientifically measured
(by the IGI!). That understanding may then feed back into traditional
computer-only approaches. Don't know. We have to do it to find out.
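
To make the dK/dt point concrete, here is a toy numerical sketch (every
number below is invented for illustration): treat 'knowledge' as a count of
facts a system holds, sample it over time, and compare the growth rates
rather than the totals:

```python
# Toy illustration of the dKnowledge(t)/dt idea (all data invented):
# what matters is the rate of autonomous knowledge growth, not the
# absolute amount of knowledge held at any instant.

def knowledge_rate(samples):
    """samples: list of (t, K) pairs with t strictly increasing.
    Returns finite-difference estimates of dK/dt between samples."""
    return [
        (k2 - k1) / (t2 - t1)
        for (t1, k1), (t2, k2) in zip(samples, samples[1:])
    ]

# A human-curated system (CYC-like) stalls when the curators stop;
# an autonomous learner keeps finding out on its own.
curated    = [(0, 0), (1, 100), (2, 200), (3, 200), (4, 200)]
autonomous = [(0, 0), (1, 20),  (2, 45),  (3, 75),  (4, 110)]

print(knowledge_rate(curated))     # → [100.0, 100.0, 0.0, 0.0]
print(knowledge_rate(autonomous))  # → [20.0, 25.0, 30.0, 35.0]
```

The curated system looks far more knowledgeable at every sampled instant,
yet its rate collapses to zero; that collapse is the signature of the
puppeteer being withdrawn.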

I'm not sure how to express this concisely in the planned paper/document.
It probably needs to be expressed somehow. I'll have a go and maybe we can
workshop it into a formal position.


>
>
> On Thu, May 21, 2015 at 12:29 PM, Benjamin Kapp <[email protected]> wrote:
>
>> How about we have a discussion sometime this weekend?  We have some
>> serious timezone issues to work around..
>>
>> On Wed, May 20, 2015 at 10:48 PM, Mark Seveland <[email protected]>
>> wrote:
>>
>>> Just a suggestion. Google+ Meetups are a good way for everyone to meet
>>> each other, and in live voice and/or video chat discuss topics.
>>>
>>> On Wed, May 20, 2015 at 7:33 PM, Colin Hales <[email protected]>
>>> wrote:
>>>
>>>> Hi Dorian et. al.,
>>>> I am having trouble getting time to properly participate here because
>>>> of family stuff and my other commitments. I'm checking in to acknowledge
>>>> how encouraging it is to see the activity is ongoing, and the birth of a
>>>> possible paper that might underpin whatever this IGI initiative turns into.
>>>>
>>>> I'd like to focus my efforts on the paper primarily as a way to
>>>> discover IGI directions. So if you could bear with a patchy contribution
>>>> from me for a little while it would be greatly appreciated. I have a
>>>> particularly difficult week ahead of me. There's no huge crashing need for
>>>> speed here, so I'm hoping slow and steady might be OK.
>>>>
>>>> Whatever form this website takes: fantastic. It may only ever be a
>>>> 'line in the sand'. But it's a significant one in the greater scheme of AGI
>>>> futures and really good to see after being sidelined for so long. Yay!
>>>>
>>>> cheers
>>>> Colin Hales
>>>>
>>>>
>>>>
>>>> On Thu, May 21, 2015 at 10:07 AM, Mike Archbold <[email protected]>
>>>> wrote:
>>>>
>>>>> Why don't you just call it "AI" and if somebody asks THEN you can
>>>>> clarify it?  I mean, why be arcane about it?  One of the reasons I got
>>>>> into AI is because I don't like the way that people create things that
>>>>> are intentionally difficult and known only to the in-group.  Now here
>>>>> you go with a boatload of new acronyms, known only to the select tiny
>>>>> group that knows the secret meaning behind it.  So, I guess I am
>>>>> getting into Alan Grimes vent space with this.
>>>>>
>>>>> On 5/20/15, Dorian Aur <[email protected]> wrote:
>>>>> > *Colin et al,*
>>>>> >
>>>>> >
>>>>> > A possible plan for H-AGI towards S-AGI paper
>>>>> >
>>>>> >
>>>>> >
>>>>> > *Hybrid artificial general intelligent systems towards S-AGI*
>>>>> >
>>>>> >
>>>>> >
>>>>> >
>>>>> >
>>>>> > *Introduction* – a short presentation of AI systems and general goal
>>>>> to
>>>>> > build human general intelligence
>>>>> >
>>>>> >
>>>>> >
>>>>> > Why H-AGI?
>>>>> >
>>>>> >    - Present different forms of computation (particular forms of
>>>>> >    computation: analog, digital - Turing machines)
>>>>> >    - Computations in the brain (examples of computations that are
>>>>> hardly
>>>>> >    replicated on digital computers)
>>>>> >    - H-AGI can include all forms of computations, algorithmic /
>>>>> >    non-algorithmic, analog, digital,* quantum and classical *since
>>>>> >     biological structure is incorporated in the system
>>>>> >
>>>>> >
>>>>> >
>>>>> > *Steps to develop  H-AGI*
>>>>> >
>>>>> >
>>>>> >
>>>>> >    - A.  Build the structure using either natural stem cells or
>>>>> induced
>>>>> >    pluripotent cells  a three-dimensional vascularized structure,
>>>>> test 3D
>>>>> >    printing possibilities
>>>>> >    - Shape the structure and control  spatial organization of cells
>>>>> >    - Detect the need of neurotrophic factors, nutrients and oxygen
>>>>> ...use
>>>>> >    nanosensor devices, carbon nanotubes...
>>>>> >    - Regulate, control the entire phenomenon using a computer
>>>>> interface,
>>>>> >    ability to use combine analog/digital and biophysical computations
>>>>> >
>>>>> > B. Train the hybrid system
>>>>> >
>>>>> >    - Enhance bidirectional communication between biological
>>>>> structure and
>>>>> >    computers
>>>>> >    - Create and use  a virtual world to provide accelerated
>>>>> training, use
>>>>> >    machine learning, DL,  digital/algorithmic  AI or AGI if
>>>>> something is
>>>>> >    developed on digital systems
>>>>> >    - The interactive training system should also shape the evolution
>>>>> of
>>>>> >    biological structure,  natural language and visual information
>>>>> can be
>>>>> >    progressively included
>>>>> >
>>>>> >  see  details in Can we build a conscious machine,
>>>>> > http://arxiv.org/abs/1411.5224
>>>>> >
>>>>> >
>>>>> > *Goals of H-AGI*
>>>>> >
>>>>> > H-AGI  can be seen as a transitional step required to understand
>>>>> which
>>>>> > parts can be fully replicated in a synthetic form to  build a more
>>>>> powerful
>>>>> > system,
>>>>> >
>>>>> > ·        Natural language processing, robotics...
>>>>> >
>>>>> > ·        Space exploration, colonization..... etc
>>>>> >
>>>>> > ·        Techniques for therapy (brain diseases, cancer ....) since
>>>>> we will
>>>>> > learn how to shape biological structure
>>>>> >
>>>>> >
>>>>> >
>>>>> >
>>>>> > Dorian
>>>>> >
>>>>> >
>>>>> > PS This brief presentation may  also provide an idea about possible
>>>>> > collaboration list 1- list 3
>>>>> >
>>>>> >
>>>>> >
>>>>> > On Tue, May 19, 2015 at 11:20 PM, Mike Archbold <[email protected]
>>>>> >
>>>>> > wrote:
>>>>> >
>>>>> >> > A summary ....we are looking at the idea that there are 2
>>>>> fundamental
>>>>> >> kinds
>>>>> >> > of putative AGI (1) & (3), and their hybrid (2) that forms a third
>>>>> >> approach
>>>>> >> > as follows:
>>>>> >> >
>>>>> >> > (1) C-AGI      computer substrate only. Neuromorphic equivalents
>>>>> of it.
>>>>> >> > (2) H-AGI      hybrid of (1) and (3). The inorganic version is a
>>>>> new
>>>>> >> > kind
>>>>> >> > of neuromorphic chip. The organic version has ... erm... organics
>>>>> in
>>>>> >> > it.
>>>>> >> > (3) S-AGI      synthetic AGI. organic or inorganic. Natural brain
>>>>> >> > physics
>>>>> >> > only. No computer.
>>>>> >> >
>>>>> >> > (aside: S-AGI just came out of my fingers. I hope this is OK,
>>>>> Dorian!)
>>>>> >> >
>>>>> >>
>>>>> >> This is a cool idea, somewhat mind boggling in its possibilities.
>>>>> >> Cool though!
>>>>> >>
>>>>> >> Personally I would favor something more like "EM-AGI" for
>>>>> >> electromagnetic AGI.  I mean, I don't understand the details of the
>>>>> >> approach, only the generalities.  But, "S" seems a bit
>>>>> vague/ambiguous
>>>>> >> while EM hits it more or less on target IMHO.
>>>>> >>
>>>>> >> MIke A
>>>>> >>
>>>>> >>
>>>>> >> > Think this way: What we have now is 100% computer. S-AGI is 100%
>>>>> >> > natural
>>>>> >> > physics (organic or inorganic). H-AGI is set somewhere in between.
>>>>> >> > It's
>>>>> >> > the level of computer computation/natural computation that is at
>>>>> issue.
>>>>> >> All
>>>>> >> > are computation.
>>>>> >> >
>>>>> >> > The human brain is a natural version of (3) with a
>>>>> neuronal/astrocyte
>>>>> >> >  substrate. (3) has no computer whatever in it. it retains all the
>>>>> >> natural
>>>>> >> > physics (whatever that is). H-AGI targets the inclusion of the
>>>>> >> > essential
>>>>> >> > natural brain physics in the substrate of (2) and to incorporate
>>>>> (1)
>>>>> >> > computer-substrates and software to an extent to be determined.
>>>>> In my
>>>>> >> case
>>>>> >> > an H-AGI would be inorganic. Others see differently.
>>>>> >> >
>>>>> >> > Where you might have a stake in this?
>>>>> >> >
>>>>> >> > The history of AGI can be summed up as an experiment that seeks
>>>>> to see
>>>>> >> > if
>>>>> >> > the role of (1) C-AGI as a brain is fundamentally
>>>>> indistinguishable
>>>>> >> > from
>>>>> >> > (3) S-AGI under all conditions. That is the hypothesis. The 65
>>>>> year old
>>>>> >> bet
>>>>> >> > that has attracted 100% of the investment to date. H-AGI does not
>>>>> make
>>>>> >> that
>>>>> >> > presupposition and seeks to contrast (1) and (3) in revealing
>>>>> ways that
>>>>> >> > then allow us to speak authoritatively about the (1)/(3)
>>>>> relationship
>>>>> >> > in
>>>>> >> > AGI potential. Only then will we really understand the difference
>>>>> >> > between
>>>>> >> > (1) and (3). So far that difference is entirely an intuition. A
>>>>> good
>>>>> >> one.
>>>>> >> > But only intuition. It's time for that intuition to be turned into
>>>>> >> science.
>>>>> >> > Experiments in (1) have ruled to date. Now we seek to do some
>>>>> (2)...
>>>>> >> > E.E.
>>>>> >> > we have 65 years of 'control' subject. H-AGI builds the first
>>>>> 'test'
>>>>> >> > subject.
>>>>> >> >
>>>>> >> > How about this?
>>>>> >> >
>>>>> >> > What would be super cool is if this mighty AGI beast you intend
>>>>> making
>>>>> >> > could be turned into the brain of a robot. Then we could contrast
>>>>> what
>>>>> >> > it
>>>>> >> > does with what an IGI candidate brain does in an identical robot
>>>>> in the
>>>>> >> > same test. That kind of testing vision (as far off as it may
>>>>> seem) is a
>>>>> >> > potential way your work and the IGI might interface. Which
>>>>> candidate
>>>>> >> robot
>>>>> >> > best encounters radical novelty, without any human
>>>>> >> intervention/involvement
>>>>> >> > whatever? .... is a really good question. To do this test you'd
>>>>> not
>>>>> >> > need
>>>>> >> to
>>>>> >> > reveal anything about its workings. Observed robot behaviour is
>>>>> >> > decisive.
>>>>> >> >
>>>>> >> > It seems to me that whatever venture you plan, it might be wise
>>>>> to keep
>>>>> >> an
>>>>> >> > eye on any (2)/(3) approaches. IGI or not. Because it is directly
>>>>> >> informing
>>>>> >> > expectations of outcomes in (1). We are currently asking the
>>>>> question
>>>>> >> "*If
>>>>> >> > H-AGI were to be championed into existence, what would the first
>>>>> >> > vehicle
>>>>> >> > for that look like?*" If the enthusiasm maintains it will be
>>>>> sketched
>>>>> >> into
>>>>> >> > a web page and we'll see what it tells us and what to do next. It
>>>>> may
>>>>> >> halt.
>>>>> >> > It may go. I don't know. Worth a shot? You bet.
>>>>> >> >
>>>>> >> > With your (1) C-AGI glasses firmly strapped to your head, your
>>>>> wisdom
>>>>> >> > at
>>>>> >> > all stages in this would be well received, whatever the messages.
>>>>> So if
>>>>> >> you
>>>>> >> > have time to keep an  eye on happenings, I for one would
>>>>> appreciate it.
>>>>> >> >
>>>>> >> > regards
>>>>> >> >
>>>>> >> > Colin Hales
>>>>> >> >
>>>>> >> >
>>>>> >> >
>>>>> >> > On Wed, May 20, 2015 at 6:58 AM, Peter Voss <[email protected]>
>>>>> wrote:
>>>>> >> >
>>>>> >> >> Thanks for asking. Haven’t followed the IGI discussions.
>>>>> >> >>
>>>>> >> >>
>>>>> >> >>
>>>>> >> >> Is this about non-computer based approaches to AGI?  If so, I
>>>>> don’t
>>>>> >> think
>>>>> >> >> I have anything positive to contribute.
>>>>> >> >>
>>>>> >> >>
>>>>> >> >>
>>>>> >> >> More generally, non-profit orgs need strong focus and
>>>>> champions.  And
>>>>> >> >> specific goals.
>>>>> >> >>
>>>>> >> >>
>>>>> >> >>
>>>>> >> >> *From:* Benjamin Kapp [mailto:[email protected]]
>>>>> >> >> *Sent:* Tuesday, May 19, 2015 12:23 PM
>>>>> >> >> *To:* AGI
>>>>> >> >> *Subject:* Re: [agi] Institute of General Intelligence (IGI)
>>>>> >> >>
>>>>> >> >>
>>>>> >> >>
>>>>> >> >> Mr. Voss,
>>>>> >> >>
>>>>> >> >> Given your understanding of the AGI community do you believe an
>>>>> IGI
>>>>> >> would
>>>>> >> >> be redundant?  Would your organization be open to collaborating
>>>>> with
>>>>> >> >> the
>>>>> >> >> IGI?  Do you have any advice for how we could be successful in
>>>>> >> >> starting
>>>>> >> >> up
>>>>> >> >> this organization?  Perhaps you would be open to being a member
>>>>> of the
>>>>> >> >> board?
>>>>> >> >>
>>>>> >> >>
>>>>> >> >>
>>>>> >> >> On Tue, May 19, 2015 at 2:03 PM, Peter Voss <[email protected]>
>>>>> wrote:
>>>>> >> >>
>>>>> >> >> Not something that can be adequately covered in a few words,
>>>>> but….
>>>>> >> “We’re
>>>>> >> >> building a fully integrated, top-down & bottom-up, real-time,
>>>>> adaptive
>>>>> >> >> knowledge (& skill) representation, learning and reasoning
>>>>> engine.
>>>>> >> >> We’re
>>>>> >> >> using a combination of graph representation and NN techniques
>>>>> overlaid
>>>>> >> >> with
>>>>> >> >> fuzzy, adaptive rule systems” – ha!
>>>>> >> >>
>>>>> >> >>
>>>>> >> >>
>>>>> >> >> Here again are links for some clues:
>>>>> >> >>
>>>>> >> >>
>>>>> >> >>
>>>>> >> >>
>>>>> >> >>
>>>>> >>
>>>>> http://www.kurzweilai.net/essentials-of-general-intelligence-the-direct-path-to-agi
>>>>> >> >>
>>>>> >> >> http://www.realagi.com/index.html
>>>>> >> >>
>>>>> >> >> https://www.facebook.com/groups/RealAGI/
>>>>> >> >>
>>>>> >> >>
>>>>> >> >>
>>>>> >> >>
>>>>> >> >>
>>>>> >> >> *From:* Benjamin Kapp [mailto:[email protected]]
>>>>> >> >>
>>>>> >> >>
>>>>> >> >>
>>>>> >> >> Mr. Voss,
>>>>> >> >>
>>>>> >> >> Since you are the founder I'm certain you know what agi-3's
>>>>> >> >> methodology
>>>>> >> >> is.  In a few words (maybe more?) could you share with us what
>>>>> that
>>>>> >> >> is?
>>>>> >> >>
>>>>> >> >>
>>>>> >> >>
>>>>> >> >> On Tue, May 19, 2015 at 1:24 PM, Peter Voss <[email protected]>
>>>>> wrote:
>>>>> >> >>
>>>>> >> >> *>*http://www.agi-3.com  They just glue together anything and
>>>>> >> everything
>>>>> >> >> that works.
>>>>> >> >>
>>>>> >> >> Actually, no.  We have a very specific theory of AGI and
>>>>> architecture
>>>>> >> >>
>>>>> >> >> *Peter Voss*
>>>>> >> >>
>>>>> >> >> *Founder, AGI Innovations Inc.*
>>>>> >> >>
>>>>> >> >> *AGI* | Archives <
>>>>> https://www.listbox.com/member/archive/303/=now>
>>>>> >> >> <
>>>>> https://www.listbox.com/member/archive/rss/303/26973278-698fd9ee>|
>>>>> >> >> Modify
>>>>> >> >> <https://www.listbox.com/member/?&;> Your Subscription
>>>>> >> >>
>>>>> >> >> <http://www.listbox.com>
>>>>> >> >>
>>>>> >> >>
>>>>> >> >>
>>>>> >> >
>>>>> >> >
>>>>> >> >
>>>>> >>
>>>>> >>
>>>>> >
>>>>> >
>>>>> >
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>
>>>
>>>
>>> --
>>> Regards,
>>> Mark Seveland
>>>
>>
>>
>



