Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread William Pearson
2009/1/9 Ben Goertzel b...@goertzel.org:
 This is an attempt to articulate a virtual world infrastructure that
 will be adequate for the development of human-level AGI

 http://www.goertzel.org/papers/BlocksNBeadsWorld.pdf

goertzel.org seems to be down, so I can't refresh my memory of the paper.

 Most of the paper is taken up by conceptual and requirements issues,
 but at the end specific world-design proposals are made.

 This complements my earlier paper on AGI Preschool.  It attempts to
 define what kind of underlying virtual world infrastructure an
 effective AGI preschool would minimally require.


In some ways this question is underdefined: it depends on what the
learning system is like. If it is like a human brain, it would need a
sufficiently (lawfully) changing world to stimulate its neural
plasticity (rain, seasons, new buildings, the death of pets, the growth
of its own body): a never-ending series of connectible but new
situations to push the brain in different directions. Cats' eyes
deprived of stimulation go blind, so a brain in an unstimulating
environment might fail to develop.

So I would say that not only are certain dynamics important, but there
should also be a large variety of externally presented examples.
Consider learning electronics: the metaphor of rivers and dams is often
used to teach it, but if the only example of fluid dynamics you have
come across is a flat pool of beads, you might not get the metaphor.
Similarly, a kettle boiling dry might be used to teach part of the
water cycle.

There may be lots of other subconscious analogies of this sort that
have to be made when we are young and that we don't know about. That
would be my worry when implementing a virtual world for AI development.

If it is not like a human brain (in this respect), then the question
is a lot harder. Also, are you expecting the AIs to make tools out of
the blocks and beads?

  Will


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=126863270-d7b0b0
Powered by Listbox: http://www.listbox.com


Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Ben Goertzel
Yes, I'm expecting the AI to make tools from blocks and beads.

No, I'm not attempting to make a detailed simulation of the human
brain/body; I'm just trying to use vaguely humanlike embodiment and a
high-level mind-architecture, together with computer-science
algorithms, to achieve AGI.





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

This is no place to stop -- half way between ape and angel
-- Benjamin Disraeli




Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread William Pearson
2009/1/13 Ben Goertzel b...@goertzel.org:
 Yes, I'm expecting the AI to make tools from blocks and beads

 No, i'm not attempting to make a detailed simulation of the human
 brain/body, just trying to use vaguely humanlike embodiment and
 high-level mind-architecture together with computer science
 algorithms, to achieve AGI

I wasn't suggesting you were, or should. The comment about one's own
changing body was simply one of many examples of things that happen in
the world that we have to cope with and adjust to, making our brains
flexible and leading to development rather than stagnation.

As we don't have a formal specification for all the mind agents in
OpenCog, it is hard to know how it will actually learn. The question is
how humanlike you have to be for a lack of varied stimulation to cause
developmental problems. If you emphasised that you were going to make
the world the AI exists in alive, that is, not just play pens where the
AIs/humans do things and see the results, but some sort of consistent
ecology, I would be happier. Humans managed to develop fairly well
before there was such a thing as structured pre-school; replicating
that sort of environment seems more important for AI growth, since
humans still develop there as well as in structured, teacher-led
pre-school.

Since I can now get to the paper, some further thoughts. Concepts that
would seem hard to form in your world are organic growth and phase
changes of materials. Also, naive chemistry would seem to be somewhat
important (cooking, dissolving materials, burning: things a
pre-schooler would come into contact with more at home than in
structured pre-school).

  Will




Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Ben Goertzel
Hi,

 Since I can now get to the paper, some further thoughts. Concepts that
 would seem hard to form in your world are organic growth and phase
 changes of materials. Also, naive chemistry would seem to be somewhat
 important (cooking, dissolving materials, burning: things a
 pre-schooler would come into contact with more at home than in
 structured pre-school).

Actually, you could probably get plantlike growth using beads, via
methods similar to L-systems (used in graphics for simulating plant
growth).
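For instance, a bracketed L-system of the kind used in graphics could
drive bead placement. This toy sketch (the symbol and rewrite rule are
purely illustrative, not from the paper) shows the parallel-rewriting
step:

```python
# Toy bracketed L-system: every symbol is rewritten in parallel per step.

def expand(axiom, rules, steps):
    """Apply the rewrite rules to every symbol, `steps` times."""
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(c, c) for c in s)
    return s

# 'B' might be rendered as a bead; '[' and ']' open and close a branch.
rules = {"B": "B[B]B"}
print(expand("B", rules, 2))  # B[B]B[B[B]B]B[B]B
```

Each expansion step roughly doubles the branching structure, which is
what gives L-system renderings their plantlike look.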

Woody plants could be obtained using a combination of blocks and
beads, as well.

Phase changes would probably arise via phase transitions in bead
conglomerates, with the control parameters driven by changes in
adhesion.

However, naive chemistry would exist only in a far more primitive form
than in the real world, I'll have to admit.  This is just a
shortcoming of the BlocksNBeadsWorld, and I think it's an acceptable
one...

ben




Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Russell Wallace
Melting and boiling at least should be doable: assign every bead a
temperature, and let solid interbead bonds turn liquid above a certain
temperature and disappear completely above some higher temperature.
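A minimal sketch of that rule. The threshold values are arbitrary
placeholders, and deriving the bond state from the mean temperature of
the two beads is my assumption:

```python
# Per-bead temperature with two thresholds: bonds are solid below MELT,
# liquid between MELT and BOIL, and gone entirely above BOIL.

MELT, BOIL = 273.0, 373.0  # placeholder thresholds

def bond_state(t_a, t_b):
    """State of the bond between two beads, from their mean temperature."""
    t = (t_a + t_b) / 2.0
    if t < MELT:
        return "solid"
    elif t < BOIL:
        return "liquid"
    return "none"   # the bond disappears completely

print(bond_state(250.0, 260.0))  # solid
print(bond_state(300.0, 320.0))  # liquid
print(bond_state(400.0, 390.0))  # none
```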




Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Russell Wallace
And it occurs to me you could even have fire. Let fire be an element
whose beads have negative gravitational mass. Beads of fuel elements
like wood have a threshold temperature above which they turn into
fire beads, with a release of additional heat.
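A toy version of that combustion rule; the ignition temperature, the
amount of heat released, and the bead representation are all made-up
illustrations:

```python
# Fuel beads above an ignition threshold become fire beads with
# negative gravitational mass (so they rise) and release extra heat.

IGNITION = {"wood": 550.0}   # hypothetical per-fuel ignition temperatures
HEAT_RELEASED = 150.0        # hypothetical heat released on ignition

def step_bead(element, temperature, mass):
    """Return (element, temperature, mass) after one combustion check."""
    if element in IGNITION and temperature >= IGNITION[element]:
        # Fuel converts to fire: negative mass makes the bead float upward.
        return "fire", temperature + HEAT_RELEASED, -abs(mass)
    return element, temperature, mass

print(step_bead("wood", 600.0, 1.0))  # ('fire', 750.0, -1.0)
print(step_bead("wood", 300.0, 1.0))  # ('wood', 300.0, 1.0)
```

Non-fuel elements pass through unchanged, so the same step can be
applied uniformly to every bead in the world.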




Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Ben Goertzel
Indeed...  but cake-baking just won't have the same nuances ;-)



Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Russell Wallace
Yeah :-) though boiling an egg by putting it in a pot of boiling
water should, I think, be doable.



Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Philip Hunt
2009/1/12 Ben Goertzel b...@goertzel.org:
 The problem with simulations that run slower than real time is that
 they aren't much good for running AIs interactively with humans... and
 for AGI we want the combination of social and physical interaction

There's plenty you can do with real-time interaction.

OTOH, there's lots you can do with batch processing, e.g. tweaking the
AI's parameters and seeing how it performs on the same task. And of
course you can have a regression test suite of tasks for the AI to
perform as you improve it. How useful this approach is depends on how
much processing power you need: if processing is very expensive, it
makes less sense to re-run an extensive test suite whenever you make a
change.
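That batch workflow might look something like the sketch below, where
the task names, the baseline scores, and the `run_task` stand-in are
all hypothetical:

```python
# Regression suite sketch: re-run a fixed task set after a parameter
# change and report any task whose score fell below the stored baseline.

def run_task(task, params):
    """Stand-in for an expensive simulated run; returns a score."""
    return params.get("gain", 1.0) * {"stack-blocks": 0.5, "sort-beads": 0.8}[task]

BASELINE = {"stack-blocks": 0.5, "sort-beads": 0.8}  # scores from a prior run

def regression_suite(params):
    """Return the tasks whose score regressed relative to the baseline."""
    return [t for t, base in BASELINE.items()
            if run_task(t, params) < base]

print(regression_suite({"gain": 1.0}))  # [] -- no regressions
print(regression_suite({"gain": 0.9}))  # both tasks regress
```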

-- 
Philip Hunt, cabala...@googlemail.com
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Philip Hunt
2009/1/9 Ben Goertzel b...@goertzel.org:
 Hi all,

 I intend to submit the following paper to JAGI shortly, but I figured
 I'd run it past you folks on this list first, and incorporate any
 useful feedback into the draft I submit

Perhaps the paper could go into more detail about what sensory input
the AGI would have.

E.g. you might specify that its vision system consists of two
pixelmaps (binocular vision), each 1000x1000 pixels, with three colour
channels at 16 bits of intensity, updated 20 times per second.

Of course, you may want to specify the visual system differently, but
it's useful to say so and make your assumptions concrete.
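For concreteness, that specification could be written down as a small
structure like the following (the field names are mine; the numbers
come from the example above):

```python
# Hypothetical spec for the visual input channel of a virtual-world AGI.
from dataclasses import dataclass

@dataclass
class VisionSpec:
    eyes: int = 2              # binocular
    width: int = 1000
    height: int = 1000
    channels: int = 3          # three colours
    bits_per_channel: int = 16
    fps: int = 20

    def bits_per_second(self) -> int:
        """Raw input bandwidth implied by the spec."""
        pixels = self.eyes * self.width * self.height
        return pixels * self.channels * self.bits_per_channel * self.fps

spec = VisionSpec()
print(spec.bits_per_second())  # 1920000000 bits/s raw
```

Writing it down this way also makes the bandwidth cost of each
assumption explicit: roughly 1.9 gigabits per second of raw input for
the figures quoted above.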





Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Ben Goertzel
Actually, I view that as a matter for the AGI system, not the world.

Different AGI systems hooked up to the same world may choose to
receive different inputs from it.

Binocular vision, for instance, is not necessary in a virtual world;
some AGIs might want to use it whereas others might not...



Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Matt Mahoney
My response to Ben's paper is to be cautious about drawing conclusions from 
simulated environments. Human-level AGI has an algorithmic complexity of 10^9 
bits (as estimated by Landauer). It is not possible to learn this much 
information from an environment that is less complex. If a baby AI performed 
well in a simplified simulation of the world, it would not imply that the same 
system would work in the real world. It would be like training a language model 
on a simple, artificial language and then concluding that the system could be 
scaled up to learn English.

This is a lesson from my dissertation work in network intrusion anomaly 
detection. This was a machine learning task in which the system was trained on 
attack-free network traffic, and then identified anything out of the ordinary 
as malicious. For development and testing, we used the 1999 MIT-DARPA Lincoln 
Labs data set, consisting of 5 weeks of synthetic network traffic with hundreds 
of labeled attacks. The test set developers took great care to make the data as 
realistic as possible: they collected statistics from real networks; built an 
isolated network of 4 real computers running different operating systems plus 
thousands of simulated computers that generated HTTP requests to public 
websites and mailing lists; generated synthetic email using English word-bigram 
frequencies; and produced other kinds of traffic.

In my work I discovered a simple algorithm that beat the best intrusion 
detection systems available at the time. I parsed network packets into 
individual 1-4 byte fields, recorded all the values that ever occurred at least 
once in training, and flagged any new value in the test data as suspicious, 
with a score inversely proportional to the size of the set of values observed 
in training and proportional to the time since the previous anomaly.
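A rough reconstruction of that algorithm. The field parsing and the
exact scoring formula beyond "inversely proportional to the set size,
proportional to the time since the previous anomaly" are my guesses:

```python
# Train on attack-free traffic; at test time, score never-seen field
# values as t/n, where n is the number of distinct training values for
# that field and t is the time since that field's previous anomaly.
from collections import defaultdict

class AnomalyDetector:
    def __init__(self):
        self.seen = defaultdict(set)           # field -> training values
        self.last_anomaly = defaultdict(float) # field -> time of last anomaly

    def train(self, packet):
        for field, value in packet.items():
            self.seen[field].add(value)

    def score(self, packet, now):
        total = 0.0
        for field, value in packet.items():
            if value not in self.seen[field]:
                t = now - self.last_anomaly[field]
                total += t / max(len(self.seen[field]), 1)
                self.last_anomaly[field] = now
        return total

det = AnomalyDetector()
det.train({"ttl": 64})
det.train({"ttl": 128})
print(det.score({"ttl": 63}, now=10.0))  # 5.0: novel value, 2 known values
```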

Not surprisingly, the simple algorithm failed on real network traffic. There 
were too many false alarms for it to be even remotely useful. The reason it 
worked on the synthetic traffic was that it was algorithmically simple compared 
to real traffic. For example, one of the most effective tests was the TTL 
value, a counter that decrements with each IP routing hop, intended to prevent 
routing loops. It turned out that most of the attacks were simulated from a 
machine that was one hop further away than the machines simulating normal 
traffic.

A problem like that could have been fixed, but there were a dozen others that I 
found, and probably many that I didn't find. It's not that the test set 
developers weren't careful. They spent probably $1 million developing it 
(several people over 2 years). It's that you can't simulate the high complexity 
of thousands of computers and human users with anything less than that. Simple 
problems have simple solutions, but that's not AGI.

-- Matt Mahoney, matmaho...@yahoo.com


--- On Fri, 1/9/09, Ben Goertzel b...@goertzel.org wrote:

 From: Ben Goertzel b...@goertzel.org
 Subject: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?
 To: agi@v2.listbox.com
 Date: Friday, January 9, 2009, 5:58 PM

 Hi all,

 I intend to submit the following paper to JAGI shortly, but I figured
 I'd run it past you folks on this list first, and incorporate any
 useful feedback into the draft I submit.

 This is an attempt to articulate a virtual world infrastructure that
 will be adequate for the development of human-level AGI:

 http://www.goertzel.org/papers/BlocksNBeadsWorld.pdf

 Most of the paper is taken up by conceptual and requirements issues,
 but at the end specific world-design proposals are made.

 This complements my earlier paper on AGI Preschool.  It attempts to
 define what kind of underlying virtual world infrastructure an
 effective AGI preschool would minimally require.

 thx
 Ben G

 --
 Ben Goertzel, PhD
 CEO, Novamente LLC and Biomind LLC
 Director of Research, SIAI
 b...@goertzel.org

 I intend to live forever, or die trying.
 -- Groucho Marx





Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Ben Goertzel
Matt,

The complexity of a simulated environment is tricky to estimate, if
the environment contains complex self-organizing dynamics, random
number generation, and complex human interactions ...

ben


Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Matt Mahoney
--- On Tue, 1/13/09, Ben Goertzel b...@goertzel.org wrote:

 The complexity of a simulated environment is tricky to estimate, if
 the environment contains complex self-organizing dynamics, random
 number generation, and complex human interactions ...

In fact it's not computable. But if you write 10^6 bits of code for your 
simulator, you know it's less than 10^6 bits.

But I wonder which is a better test of AI.

http://cs.fit.edu/~mmahoney/compression/text.html
is based on natural language prediction, equivalent to the Turing test. The 
data has 10^9 bits of complexity, just enough to train a human adult language 
model.

http://cs.fit.edu/~mmahoney/compression/uiq/
is based on Legg and Hutter's universal intelligence. It probably has a few 
hundred bits of complexity, designed to be just beyond the reach of 
tractability for universal algorithms like AIXI^tl.

-- Matt Mahoney, matmaho...@yahoo.com





Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-12 Thread Russell Wallace
I think this sort of virtual world is an excellent idea.

I agree with Benjamin Johnston's idea of a unified object model where
everything consists of beads.

I notice you mentioned distributing the computation. This would
certainly be valuable in the long run, but for the first version I
would suggest running each simulation instance on a single machine
with the fastest physics-capable GPU on the market, and accepting that
it will still run slower than real time. Let an experiment be an
overnight run, and use multiple machines by running multiple
experiments at the same time. That would make the programming for the
first version more tractable.




Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-12 Thread Ben Goertzel
The problem with simulations that run slower than real time is that
they aren't much good for running AIs interactively with humans... and
for AGI we want the combination of social and physical interaction

However, I agree that for an initial prototype implementation of bead
physics that would be the best approach...



Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-12 Thread Benjamin Johnston




Actually, I think it would be easier, more useful and more portable to 
distribute the computation rather than trying to make it run on a GPU.


This kind of computation is very local - objects can primarily interact 
only with objects that are nearby. A straightforward distribution scheme 
would be to partition the world spatially - slice up the world into 
different regions and run each region on a different PC. A first version 
might use a fixed partitioning, but more sophisticated versions could 
have partitions that grow or shrink according to their computational 
load, or could have partitions that are based on a balanced spatial 
indexing scheme.
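As a rough sketch of the fixed-partitioning scheme just described (the names, world size and grid shape here are illustrative assumptions, not from any actual implementation):

```python
# Illustrative sketch of fixed spatial partitioning: slice a square
# world into a GRID x GRID lattice of regions, one region per machine.
# Bead, WORLD_SIZE and GRID are made up for this example.
from dataclasses import dataclass

@dataclass
class Bead:
    x: float
    y: float

WORLD_SIZE = 100.0   # world spans [0, 100) on each axis
GRID = 4             # 4 x 4 = 16 regions, i.e. 16 PCs

def region_of(bead: Bead) -> int:
    """Index of the region (machine) that owns this bead."""
    cell = WORLD_SIZE / GRID
    col = min(int(bead.x // cell), GRID - 1)
    row = min(int(bead.y // cell), GRID - 1)
    return row * GRID + col

# Assign a population of beads to their owning regions.
beads = [Bead(5.0, 5.0), Bead(99.0, 99.0), Bead(30.0, 80.0)]
regions = {}
for b in beads:
    regions.setdefault(region_of(b), []).append(b)
```

The more sophisticated versions mentioned above would only change `region_of` (e.g. looking up a balanced spatial index instead of a fixed lattice); the ownership idea stays the same.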


I believe that my scheme is fairly scalable - on the order of O(n log n) 
(i.e., a fairly fixed local computation for each node in the graph, 
including a search in a spatial index to find its nearest neighbors). My 
current implementation is O(n^2) because I haven't done any indexing.
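The indexing step that would take an implementation from O(n^2) toward the scaling described could be as simple as a uniform-grid spatial hash; a hedged sketch (all names here are illustrative):

```python
# Sketch of a uniform-grid spatial hash: each neighbour query touches
# only the 3x3 block of cells around a point, so per-point cost is
# bounded for bounded density. Cell size must be >= the query radius.
from collections import defaultdict
import math

def build_index(points, cell):
    """Bucket point ids by grid cell."""
    index = defaultdict(list)
    for i, (x, y) in enumerate(points):
        index[(int(x // cell), int(y // cell))].append(i)
    return index

def neighbours(points, index, cell, i, radius):
    """Ids of points within `radius` of point i (excluding i itself)."""
    x, y = points[i]
    cx, cy = int(x // cell), int(y // cell)
    found = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for j in index.get((cx + dx, cy + dy), []):
                if j != i and math.dist(points[i], points[j]) <= radius:
                    found.append(j)
    return found

points = [(0.0, 0.0), (0.5, 0.5), (5.0, 5.0)]
idx = build_index(points, cell=1.0)
```

Building the index is O(n) and each query is roughly constant-cost, which is where the near-linear overall scaling comes from.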


The great thing about solving distribution up front is that you improve 
the performance without any development costs, simply by throwing more 
computers at it. Consider, for example, that it only costs US$20/hour to 
rent 100 High-CPU instances from Amazon EC2. That is a lot of 
computing power!


-Ben





Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-12 Thread Russell Wallace
On Tue, Jan 13, 2009 at 1:22 AM, Benjamin Johnston
johns...@it.uts.edu.au wrote:
 Actually, I think it would be easier, more useful and more portable to
 distribute the computation rather than trying to make it run on a GPU.

If it would be easier, fair enough; I've never programmed a GPU, I
don't really know how difficult that is.




RE: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-11 Thread Benjamin Johnston
Hi Ben,

I've been looking at the same problem from a different angle... rather than
searching for simplified artificial worlds for an agent to live in, I've
been searching for models of the world to be used directly for reasoning
(i.e., the internal world for an agent situated in the real world).


I'll be interested to see what you do with CogDevWorld, however you may be
interested in some of the specifics of my approach:


I take the approach that *everything* should be beads. I model the world
with 'beads' (point masses that also carry heat, color, etc.), 'joins' that
structure them together (and that can change under stress or heat) and
'surfaces' (convex hulls around small sets of beads) that stop objects from
passing through one another.

Solid objects can be constructed from beads with rigid (but 'snap'-able)
joins; flexible objects from beads with flexible joins; gases from weightless
unjoined beads; and liquids/adhesives from 'local joins' that attract and
repel their neighbours (to create incompressible liquids with surface
tension).

I like this approach because everything is uniform: not only does the same
mechanism simulate liquids, solids and gases, but, more importantly, new laws
of physics can be added 'orthogonally' to the existing laws. The approach is
flexible and open-ended.
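A minimal, hypothetical rendering of the bead/join scheme just described (the field names are guesses for illustration, not taken from the Comirit papers):

```python
# Hypothetical sketch of the unified bead/join model: everything is a
# point-mass bead; solids vs. flexible objects differ only in the
# stiffness of the joins between their beads.
from dataclasses import dataclass

@dataclass
class Bead:
    pos: tuple         # point-mass position (x, y)
    mass: float = 1.0
    heat: float = 0.0  # beads also carry heat, color, etc.

@dataclass
class Join:
    a: Bead
    b: Bead
    rest_len: float    # target separation between the two beads
    stiffness: float   # high -> rigid solid, low -> flexible object
    snap_at: float     # stress threshold: even rigid joins can 'snap'

def stress(j: Join) -> float:
    """How far a join is deformed from its rest length."""
    dx = j.a.pos[0] - j.b.pos[0]
    dy = j.a.pos[1] - j.b.pos[1]
    return abs((dx * dx + dy * dy) ** 0.5 - j.rest_len)

def survives(j: Join) -> bool:
    """A join breaks (snaps) once its stress exceeds the threshold."""
    return stress(j) < j.snap_at
```

The point of the uniformity is visible even in this toy: the same two types cover rigid, flexible and breakable structures just by varying `stiffness` and `snap_at`.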

I haven't had the resources or time to explore the different laws of physics
for such models, but it isn't hard to create realistic objects in simple
environments. This approach is far more computationally expensive than, say,
rigid-body physics of complete object models; but the locality of the
computations means that it should scale very well. (In fact, the
orthogonality of the laws of physics means that you could support many more
physical laws than could reasonably be computed simultaneously, but only
enable laws as they are relevant to particular problems - e.g., turning on
laws of heat diffusion and state-of-matter changes only when the agent is
interacting with, say, the fridge or fire).
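The enable-laws-only-when-relevant idea might look like a simple law registry; a hypothetical sketch (the `World` API and the toy law are my own, purely for illustration):

```python
# Sketch of 'orthogonal' physics laws that can be switched on only when
# relevant (e.g. heat diffusion near the fridge or fire).

class World:
    def __init__(self):
        self.laws = {}        # name -> update function
        self.enabled = set()

    def register(self, name, law):
        self.laws[name] = law

    def enable(self, name):
        self.enabled.add(name)

    def disable(self, name):
        self.enabled.discard(name)

    def step(self, beads, dt):
        # Each enabled law updates the beads independently of the rest,
        # which is what makes the laws composable ('orthogonal').
        for name in self.enabled:
            self.laws[name](beads, dt)

def heat_diffusion(beads, dt):
    """Toy law: every bead's heat relaxes toward the mean heat."""
    avg = sum(b["heat"] for b in beads) / len(beads)
    for b in beads:
        b["heat"] += dt * (avg - b["heat"])

world = World()
world.register("heat", heat_diffusion)
world.enable("heat")   # turned on only while the agent is near a fire
```

Disabling a law is then just `world.disable("heat")`; no other law needs to know.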

I outlined the basic principle in this paper:
http://www.comirit.com/papers/commonsense07.pdf
Since then, I've changed some of the details a bit (some were described in
my AGI-08 paper), added convex hulls and experimented with more laws of
physics; but the basic idea has stayed the same.

-Ben

-Original Message-
From: Ben Goertzel [mailto:b...@goertzel.org] 
Sent: Saturday, 10 January 2009 9:58 AM
To: agi@v2.listbox.com
Subject: [agi] What Must a World Be That a Humanlike Intelligence May
Develop In It?

Hi all,

I intend to submit the following paper to JAGI shortly, but I figured
I'd run it past you folks on this list first, and incorporate any
useful feedback into the draft I submit

This is an attempt to articulate a virtual world infrastructure that
will be adequate for the development of human-level AGI

http://www.goertzel.org/papers/BlocksNBeadsWorld.pdf

Most of the paper is taken up by conceptual and requirements issues,
but at the end specific world-design proposals are made.

This complements my earlier paper on AGI Preschool.  It attempts to
define what kind of underlying virtual world infrastructure an
effective AGI preschool would minimally require.

thx
Ben G



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx







Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-11 Thread Linas Vepstas
2009/1/10 Nathan Cook nathan.c...@gmail.com:
 What about vibration? We have specialized mechanoreceptors to detect
 vibration (actually vibration and pressure - presumably there's processing
 to separate the two). It's vibration that lets us feel fine texture, via the
 stick-slip friction between fingertip and object.

There are many different senses that can come into play.
For humans, the primary ones are sight, sound and gross
physical locations (bumping into something).

Ben makes a point of saying human-like AGI.

If we were shooting for fish-like, we'd need to include
the lateral line, which is a sensory organ that humans
simply don't have (it's used to detect movement and
vibration).

Of course, I'm interested in science-like AGI -- so,
for example, in atomic-force microscopes (AFMs), it's been
noticed that stiction (the stick/slip friction that you
talk about) is a very good way of sensing atomic-scale
properties.  There's been some effort to attach an AFM
to a spring-and-lever-mounted, motorized ball/handle-grip,
haptic interface so that humans could directly sense,
via arm and wrist muscles, atomic-scale Casimir forces,
etc.

My point is that the world of sensory input can be much
richer than a small number of basic senses.  We can
already augment human vision with infrared goggles,
but it would be even cooler to see in four primary colors
(apparently 1 in 50 women, but no men, have more
than three color receptor types in their retina).  Now that would
be something.

--linas




Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-10 Thread Nathan Cook
What about vibration? We have specialized mechanoreceptors to detect
vibration (actually vibration and pressure - presumably there's processing
to separate the two). It's vibration that lets us feel fine texture, via the
stick-slip friction between fingertip and object.

On a related note, even a very fine powder of very low friction feels
different to water - how can you capture the sensation of water using beads
and blocks of a reasonably large size?

-- 
Nathan Cook





Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-10 Thread Ben Goertzel
On Sat, Jan 10, 2009 at 4:27 PM, Nathan Cook nathan.c...@gmail.com wrote:
 What about vibration? We have specialized mechanoreceptors to detect
 vibration (actually vibration and pressure - presumably there's processing
 to separate the two). It's vibration that lets us feel fine texture, via the
 stick-slip friction between fingertip and object.

Actually, letting beads vibrate at various frequencies would seem
perfectly reasonable ... and could lead to interesting behaviors in
sets of flexibly coupled beads.

I think this would be a good addition to the model, thanks!
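As a toy illustration of that idea, even a single flexibly coupled bead vibrates indefinitely under a semi-implicit (symplectic) Euler step; the parameters here are purely illustrative:

```python
# Toy sketch: one bead flexibly coupled (by a spring) to a fixed anchor,
# integrated with semi-implicit Euler so the vibration neither blows up
# nor dies out over many steps.
def simulate(x0, v0, k=1.0, m=1.0, dt=0.01, steps=1000):
    """Return (position, velocity) after `steps` timesteps."""
    x, v = x0, v0
    for _ in range(steps):
        v += dt * (-k / m) * x   # spring force toward the anchor
        x += dt * v
    return x, v

# Displaced and released from rest, the bead keeps oscillating:
x, v = simulate(x0=1.0, v0=0.0)
```

With networks of such springs (joins of varying stiffness), sets of coupled beads would exhibit resonances and texture-like vibration patterns for free.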

 On a related note, even a very fine powder of very low friction feels
 different to water - how can you capture the sensation of water using beads
 and blocks of a reasonably large size?

The objective of a CogDevWorld such as BlocksNBeadsWorld is explicitly
**not** to precisely simulate the sensations of being in the real
world.

My question to you is: What important cognitive ability is drastically
more easily developable given a world that contains a distinction
between fluids and various sorts of bead-conglomerates?

-- Ben G




Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-10 Thread Lukasz Stafiniak
On Sat, Jan 10, 2009 at 11:02 PM, Ben Goertzel b...@goertzel.org wrote:
 On a related note, even a very fine powder of very low friction feels
 different to water - how can you capture the sensation of water using beads
 and blocks of a reasonably large size?

 The objective of a CogDevWorld such as BlocksNBeadsWorld is explicitly
 **not** to precisely simulate the sensations of being in the real
 world.

 My question to you is: What important cognitive ability is drastically
 more easily developable given a world that contains a distinction
 between fluids and various sorts of bead-conglomerates?

The objection isn't valid in equating beads with dry powder: certain
forms of adhesion between the beads give a good approximation of fluids.
You can have your hand wet with sticky beads, etc.
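A hedged sketch of such an adhesive 'local join': a pairwise radial force that repels beads packed tighter than a rest length and attracts them at moderate range (the functional form and constants are illustrative, not from the paper):

```python
# Hypothetical 'local join' force law: repulsive when two beads are
# packed tighter than the rest length, attractive at moderate range,
# zero beyond the interaction cutoff.
def local_force(r: float, rest: float = 1.0, strength: float = 1.0) -> float:
    """Signed radial force at separation r: positive pushes the beads
    apart, negative pulls them together."""
    if r >= 2.0 * rest:
        return 0.0                # beyond cutoff: no join at all
    return strength * (rest - r)  # repel inside rest, attract outside
```

Crowded beads push apart (incompressibility) while beads at a blob's edge are pulled inward by their neighbours (surface tension), which is what separates this from a dry powder.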

The model feels underspecified to me, but I'm OK with that; the ideas
are conveyed. It doesn't feel fair to insist there's no fluid dynamics
modeled, though ;-)

Best regards.




Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-10 Thread Nathan Cook
2009/1/10 Lukasz Stafiniak lukst...@gmail.com:
 On Sat, Jan 10, 2009 at 11:02 PM, Ben Goertzel b...@goertzel.org wrote:
 On a related note, even a very fine powder of very low friction feels
 different to water - how can you capture the sensation of water using beads
 and blocks of a reasonably large size?

 The objective of a CogDevWorld such as BlocksNBeadsWorld is explicitly
 **not** to precisely simulate the sensations of being in the real
 world.

 My question to you is: What important cognitive ability is drastically
 more easily developable given a world that contains a distinction
 between fluids and various sorts of bead-conglomerates?

 The objection isn't valid in equating beads with dry powder: certain
 forms of adhesion between the beads give a good approximation of fluids.
 You can have your hand wet with sticky beads, etc.


This would require at least a two-factor adhesion-cohesion model. But
Ben has a good rejoinder to my comment.
-- 
Nathan Cook




Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-10 Thread Ben Goertzel
 The model feels underspecified to me, but I'm OK with that, the ideas
 conveyed. It doesn't feel fair to insist there's no fluid dynamics
 modeled though ;-)

Yes, the next step would be to write out detailed equations for the
model.  I didn't do that in the paper because I figured that would be
a fairly empty exercise unless I also implemented some kind of simple
simulation of the model.  With this sort of thing, it's easy to write
down equations that look good, but one doesn't really know if they
make sense till one's run some simulations, done some parameter
tuning, etc.

Which seems like a quite fun exercise, but I just didn't get to it
yet... actually it would be sensible to do this together with some
nice visualization...

ben




Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-10 Thread Ronald C. Blue
It turns out that nerve cells require physical vibrations to work correctly.  
An odd discovery to say the least.  But movement of an electrostatic charge in 
a standing electromagnetic polarization field may be useful for measuring the 
vibrations of odor molecules for the odor system.  Part of an odor molecule 
moves in and out of the pore of a nerve cell.  An odor signal then would be a 
summation of averages of the different parts being stored on a standing wave 
pattern of about 30 hertz.  You can duplicate any odor if you can get the same 
ratio of the small parts of the original molecule.

  - Original Message - 
  From: Nathan Cook 
  To: agi@v2.listbox.com 
  Sent: Saturday, January 10, 2009 4:27 PM
  Subject: Re: [agi] What Must a World Be That a Humanlike Intelligence May 
Develop In It?


  What about vibration? We have specialized mechanoreceptors to detect 
vibration (actually vibration and pressure - presumably there's processing to 
separate the two). It's vibration that lets us feel fine texture, via the 
stick-slip friction between fingertip and object.

  On a related note, even a very fine powder of very low friction feels 
different to water - how can you capture the sensation of water using beads and 
blocks of a reasonably large size?

  -- 
  Nathan Cook







[agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-09 Thread Ben Goertzel
Hi all,

I intend to submit the following paper to JAGI shortly, but I figured
I'd run it past you folks on this list first, and incorporate any
useful feedback into the draft I submit

This is an attempt to articulate a virtual world infrastructure that
will be adequate for the development of human-level AGI

http://www.goertzel.org/papers/BlocksNBeadsWorld.pdf

Most of the paper is taken up by conceptual and requirements issues,
but at the end specific world-design proposals are made.

This complements my earlier paper on AGI Preschool.  It attempts to
define what kind of underlying virtual world infrastructure an
effective AGI preschool would minimally require.

thx
Ben G



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx




Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-09 Thread Eric Burton
Goertzel, this is an interesting line of investigation. What about
in-world sound perception?

On 1/9/09, Ben Goertzel b...@goertzel.org wrote:
 Hi all,

 I intend to submit the following paper to JAGI shortly, but I figured
 I'd run it past you folks on this list first, and incorporate any
 useful feedback into the draft I submit

 This is an attempt to articulate a virtual world infrastructure that
 will be adequate for the development of human-level AGI

 http://www.goertzel.org/papers/BlocksNBeadsWorld.pdf

 Most of the paper is taken up by conceptual and requirements issues,
 but at the end specific world-design proposals are made.

 This complements my earlier paper on AGI Preschool.  It attempts to
 define what kind of underlying virtual world infrastructure an
 effective AGI preschool would minimally require.

 thx
 Ben G



 --
 Ben Goertzel, PhD
 CEO, Novamente LLC and Biomind LLC
 Director of Research, SIAI
 b...@goertzel.org

 I intend to live forever, or die trying.
 -- Groucho Marx







Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-09 Thread Ben Goertzel
It's actually mentioned there, though not emphasized... there's a
section on senses...

ben g

On Fri, Jan 9, 2009 at 8:10 PM, Eric Burton brila...@gmail.com wrote:
 Goertzel, this is an interesting line of investigation. What about
 in-world sound perception?

 On 1/9/09, Ben Goertzel b...@goertzel.org wrote:
 Hi all,

 I intend to submit the following paper to JAGI shortly, but I figured
 I'd run it past you folks on this list first, and incorporate any
 useful feedback into the draft I submit

 This is an attempt to articulate a virtual world infrastructure that
 will be adequate for the development of human-level AGI

 http://www.goertzel.org/papers/BlocksNBeadsWorld.pdf

 Most of the paper is taken up by conceptual and requirements issues,
 but at the end specific world-design proposals are made.

 This complements my earlier paper on AGI Preschool.  It attempts to
 define what kind of underlying virtual world infrastructure an
 effective AGI preschool would minimally require.

 thx
 Ben G



 --
 Ben Goertzel, PhD
 CEO, Novamente LLC and Biomind LLC
 Director of Research, SIAI
 b...@goertzel.org

 I intend to live forever, or die trying.
 -- Groucho Marx









-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx




Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-09 Thread Ronald C. Blue
Not really related to your topic, but it sort of is... Many years ago 
Disney made a movie about an alien cat that was telepathic and came to Earth 
in a flying saucer.

A stupid movie, because cats cannot develop the technology to do this.

Recently I realized that while a cat cannot do this, a species like us can 
transplant computer-interface chips into cats to allow them to communicate 
with our robots, computers, and manufacturing
robot factories.  We die out as a species for unknown reasons, but the cats 
continue our civilization.


In other words, a cat could pilot a UFO and land on Earth to investigate 
life here.


Ron Blue
Assistant Professor of Psychology
Lehigh Carbon Community College
rb...@lccc.edu


