Holland's Learning Classifier System is based on the application of a
simulated economic system to a simulated evolutionary system. It is
essentially a production system whose evolving rules "pay" each other for
activation, with the external reward signal from the environment as the
root source of income and reproductive opportunities allotted according to
accrued wealth. As such, it can be thought of as a simulation of the real
economy.
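To make the "paying" concrete, here is a minimal sketch of bucket-brigade-style credit assignment (not Holland's full implementation; the `Rule` class, the fixed bid fraction `k`, and the equal split among previous winners are all simplifying assumptions of mine):

```python
class Rule:
    """A rule with a 'wealth' (strength) that determines its bid."""
    def __init__(self, name, strength=10.0):
        self.name = name
        self.strength = strength

    def bid(self, k=0.1):
        # Bid a fixed fraction of accrued strength.
        return k * self.strength

def bucket_brigade_step(winner, previous_winners, reward=0.0):
    """One credit-assignment step: the winning rule pays its bid backward
    to the rules active on the previous step, then collects any external
    reward -- the only income entering the system from outside."""
    payment = winner.bid()
    winner.strength -= payment
    for prev in previous_winners:
        prev.strength += payment / len(previous_winners)
    winner.strength += reward

# Usage: a two-rule chain where only the final rule is externally rewarded;
# the reward propagates one step backward per activation.
a, b = Rule("a"), Rule("b")
bucket_brigade_step(a, [], reward=0.0)   # a acts first; its bid goes unpaid
bucket_brigade_step(b, [a], reward=5.0)  # b pays a, then earns the reward
```

Run over many episodes, rules early in a successful chain gradually accumulate strength from the rules they feed, which is exactly the channel that cheaters and over-general rules can exploit.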

The original LCS doesn't work consistently. An optimal strategy is often
found, but then lost again as cheaters and over-general rules take over
the population and cause the network of optimal rules to collapse. XCS
(the Accuracy-based Classifier System) is an improvement on LCS which
doesn't suffer from these collapses. Rather than having each rule
accumulate a "bank" and "pay" other rules for their activation, XCS works
within the Reinforcement Learning paradigm: each rule's quality measure is
adjusted to match the quality measures of the other rules that rely on it,
with reproductive fitness biased toward the consistency/accuracy of this
signal over time rather than its magnitude.
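A hedged sketch of that update rule (simplified from Wilson's XCS; the constants `BETA` and `EPS0` and the piecewise accuracy function are illustrative assumptions, not the exact XCS formulas):

```python
BETA = 0.2   # learning rate for prediction and error updates
EPS0 = 0.5   # error threshold below which a rule counts as fully accurate

class Classifier:
    """A rule tracking a payoff prediction and the error of that prediction."""
    def __init__(self):
        self.prediction = 0.0
        self.error = 0.0

    def update(self, target):
        # Track the running error of the prediction, then nudge the
        # prediction toward the target (reward plus the downstream
        # rules' discounted estimate, in the full system).
        self.error += BETA * (abs(target - self.prediction) - self.error)
        self.prediction += BETA * (target - self.prediction)

    def accuracy(self):
        # Fitness derives from this accuracy, not from payoff size:
        # fully accurate below EPS0, falling off as error grows.
        return 1.0 if self.error < EPS0 else EPS0 / self.error
```

Because reproduction favors low prediction error rather than high payoff, an over-general rule that predicts well in some states and badly in others carries high error and low fitness, which is what blocks the takeover that collapses LCS.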

I like to think of the economy as a (crudely effective) learning
algorithm, designed to coordinate behavior to the mutual benefit of its
participants. I wonder if our economic system could benefit from running
this analogy in reverse. Could we put together a system for measuring the
"beneficence" (net positive contribution to society at large) of each
person, based on social feedback, and then tie the availability of
resources and services to this measure, thereby entraining rational
self-interest under rational shared interest? It could be argued that
money already does this, but that it does a bad job of it for reasons
comparable to why LCS isn't reliable. Maybe treating the economy as a
multi-component Reinforcement Learning problem is the way to go.



On Tue, Nov 6, 2012 at 3:47 AM, just camel <[email protected]> wrote:

>  offtopic
> a kind of paradigm shift away from short term profit maximization towards
> long term goals that benefit humanity as a whole? charles eisenstein
> suggests in his recent book that you will only see something like this
> after implementing a different kind of monetary system that will reinforce
> different incentives. nice read for a somewhat longer flight plus it does
> not cost anything.
>
> http://sacred-economics.com/
>
> /offtopic
>
>
> On 11/09/2012 08:13 PM, Piaget Modeler wrote:
>
>  That's called "Working within the current paradigm."
>
>  For AGI there must be a paradigm shift.
>
>  ~PM
>
>
>
>
> ------------------------------------------------------------------------------------------------------------------------------------------------
>
>  > Date: Sat, 10 Nov 2012 10:12:13 +0800
> > Subject: [agi] Re: Interesting interview with Nick Cassimatis about his
> new AGI startup, and the limitations of modern academia...
> > From: [email protected]
> > To: [email protected]
> >
> > This quote from the interview nicely summarizes why academia sucks as
> > a venue for making AGI progress...
> >
> > ***
> > Since our goal is to actually identify mechanisms that are powerful
> > enough to achieve human-level intelligence, the best way we have of
> > proving that our theory is correct is to actually implement it and
> > show it actually understands language at a human level. It’s actually
> > surprisingly difficult to get research like this published and
> > supported within normal academic communities because they are more
> > interested with smaller, incremental results that can be precisely
> > quantified. It is very difficult to get academic papers about complex
> > systems published in the quantities you need to thrive in academia.
> >
> > ... one of the problems with academia today is that one’s career
> > progress is disproportionately linked to bringing in money (almost
> > always government money). When one asks oneself how to best ensure
> > getting a grant, the answer is invariably, “Keep doing more of
> > whatever got money before.”
> > ***
> >
> > On Sat, Nov 10, 2012 at 10:08 AM, Ben Goertzel <[email protected]> wrote:
> > >
> http://www.forbes.com/sites/markchangizi/2012/11/09/for-siris-new-competitor-skyphrase-academia-isnt-big-enough-for-ai/
> > >
> > > --
> > > Ben Goertzel, PhD
> > > http://goertzel.org
> > >
> > > "My humanity is a constant self-overcoming" -- Friedrich Nietzsche
> >
> >
> >
> > --
> > Ben Goertzel, PhD
> > http://goertzel.org
> >
> > "My humanity is a constant self-overcoming" -- Friedrich Nietzsche
> >
> >
> > -------------------------------------------
> > AGI
> > Archives: https://www.listbox.com/member/archive/303/=now
> > RSS Feed:
> https://www.listbox.com/member/archive/rss/303/19999924-5cfde295
> > Modify Your Subscription: https://www.listbox.com/member/?&;
> > Powered by Listbox: http://www.listbox.com
>


