I was trying to say that even the simplest simulation might be worthwhile
because our algorithms for handling a multiplicity of possibilities - outside
of contemporary relational databases - don't seem that great.

Jim Bromer


On Tue, May 27, 2014 at 11:05 PM, Piaget Modeler via AGI 
<[email protected]> wrote:

> Perhaps we have to define an amoeba agent with very few needs and actions
> in order to get measurable and verifiable results.
>
> Or we can just use one of the Turing Test variants that are around.
>
> ~PM
>
> ------------------------------
> Date: Tue, 27 May 2014 15:34:38 -0400
> Subject: [agi] Even a Simple Narrow Simulation Might Be Interesting
> From: [email protected]
> To: [email protected]
>
>
>
> I was trying to set up a simple ground-rule case for a partial simulation
> of AGI. I started by thinking of the simplest case I could imagine, and I
> found it was a little more interesting than I had expected.
>
> I realized that the old numerical range could be used to test some
> important ideas. The idea is that a set of narrow AI implementations could
> be used to develop and test multiple-possibility indexing. Suppose the
> program has learned various responses to thousands of situations, with the
> responses weighted to reflect variations among kinds of cases. In a typical
> situation the program might detect hundreds of different characteristics
> (in the observable input environment) that it had learned to associate with
> some response, so it would need strategies both to choose the best
> responses for the situation and to learn from the experience.
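The selection step described here could be sketched roughly as follows. This is a minimal illustration, not anything from the original post: the characteristics, responses, and weights are all invented for the example, and the "best response" is simply the one whose accumulated weight is highest.

```python
# Illustrative sketch: each learned association maps an observed
# characteristic to (response, weight) pairs. Given the characteristics
# detected in a situation, accumulate weights per response and pick
# the highest-scoring one. All names here are made up for the example.
from collections import defaultdict

# learned associations: characteristic -> list of (response, weight)
associations = {
    "a": [("flee", 0.9), ("freeze", 0.3)],
    "b": [("flee", 0.4), ("approach", 0.7)],
    "c": [("approach", 0.8)],
}

def choose_response(detected_characteristics):
    scores = defaultdict(float)
    for ch in detected_characteristics:
        for response, weight in associations.get(ch, []):
            scores[response] += weight
    # best-scoring response, or None if nothing matched
    return max(scores, key=scores.get) if scores else None

print(choose_response(["a", "b"]))  # flee: 0.9 + 0.4 outweighs the rest
```

With hundreds of detected characteristics the same accumulation works unchanged; the interesting open question in the post is what the learning strategy does to those weights afterward.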
>
> Even if the situation - which is the input to the simple AI program - is
> made of distinct components, the possible combinations that might be
> relevant to finding a good response could be very complex. For example, if
> the program learned that abc represented a situation it should respond to
> (with some kind of response), it might wonder whether abxc was a variant of
> that situation.
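One simple way to flag abxc as a possible variant of a learned pattern abc is to check whether the learned components appear in order inside the observed situation (a subsequence test). This is only one illustrative notion of "variant"; the original post deliberately leaves the question open.

```python
# Subsequence check: does every component of the learned pattern
# appear, in order, within the observed situation? Consuming a shared
# iterator makes each membership test resume where the last one stopped.
def is_variant(learned, observed):
    it = iter(observed)
    return all(component in it for component in learned)

print(is_variant("abc", "abxc"))  # True: a, b, c appear in order
print(is_variant("abc", "acb"))   # False: c comes before b
```

A real system would likely need something weaker or richer than strict ordering, but even this toy version shows how quickly the space of candidate variants grows.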
>
> The problem here is managing and indexing the multiple possible responses
> that were reasonable for a particular component of a situation, when a
> typical situation might consist of hundreds of such components.
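The indexing problem can be pictured as an inverted index from each situation component to the responses learned for it, so that a situation of hundreds of components can gather its candidate responses without scanning every learned case. Again, this is a sketch under invented names, not a proposal from the post itself.

```python
# Sketch of an inverted index: component -> set of learned responses.
# Candidate responses for a situation are the union over its components.
from collections import defaultdict

class ResponseIndex:
    def __init__(self):
        self._by_component = defaultdict(set)

    def learn(self, component, response):
        self._by_component[component].add(response)

    def candidates(self, situation_components):
        # union of every response associated with any component present
        found = set()
        for c in situation_components:
            found |= self._by_component.get(c, set())
        return found

index = ResponseIndex()
index.learn("a", "r1")
index.learn("b", "r1")
index.learn("b", "r2")
print(sorted(index.candidates(["a", "b", "x"])))  # ['r1', 'r2']
```

The hard part the post points at is not the lookup but what to store: when many responses are reasonable for one component, the index has to carry that multiplicity forward rather than collapse it early.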
>
> So even a very simple AI simulation might lead to some interesting results.
> (Of course there would have to be some way to evaluate the responses, so
> the simple simulation would either have to be tied to some game or it would
> have to be worked out carefully.)
> Jim Bromer
>



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
