[agi] high-performance reality simulators

2006-06-12 Thread Eugen Leitl
If you're using a virtual environment for AGI testing, are you rolling your own (if yes, open-sourced?), or using an off-the-shelf one? Are you using massive parallelism, and clusters, or hardware acceleration (either game physics, or GPU), or are you running one instance/machine? What is your

Re: [agi] high-performance reality simulators

2006-06-12 Thread Ben Goertzel
Hi, If you're using a virtual environment for AGI testing, are you rolling your own (if yes, open-sourced?), or using an off-the-shelf one? A little of both... we have built our own (open source) 3D simulation world environment, AGISim sourceforge.net/projects/agisim/ but it's based on the

Re: [agi] Motivational system

2006-06-12 Thread James Ratcliff
A couple things below William, 1) I agree that direct reward has to be in-built (into brain / AI system). 2) I don't see why direct reward cannot be used for rewarding mental achievements. I think that this "direct rewarding mechanism" is preprogrammed in genes and cannot be used directly by mind. This

Re: [agi] Reward versus Punishment? .... Motivational system

2006-06-12 Thread James Ratcliff
Will, Right now I would think that a negative reward would be usable for this aspect. I am using the positive negative reward system right now for motivational/planning aspects for the AGI. So if sitting at a desk considering a plan of action that might hurt himself or another, the plan would have

Re: [agi] list vs. forum

2006-06-12 Thread James Ratcliff
I concur, having these readily available in my mailbox, and being able to reply to them in a simple manner, is very useful. I would like to see them all indexed in Google as well though, to draw in a larger viewing audience. James Ratcliff Russell Wallace [EMAIL PROTECTED] wrote: On 6/10/06, sanjay

Re: [agi] Mentifex AI Breakthrough on Wed.7.JUN.2006

2006-06-12 Thread Ben Goertzel
More importantly, why hasn't this guy been banned from the list yet? I'm new here, so if there's a no-bans policy I don't know about, please excuse the question. http://www.nothingisreal.com/mentifex_faq.html I would assume that you all would have read this page with details about this spammer?

Re: [agi] Re: Four axioms (WAS Two draft papers . . . .)

2006-06-12 Thread James Ratcliff
James: There still would be abortion/no-abortion, x-law/no-x-law positions that would be deemed unfriendly. Mark: No. There still would be abortion/no-abortion, x-law/no-x-law positions that would be decried by some as undesirable, horrible, or immoral. Once a sufficient number of lawmakers are friendly -- only friendly

Re: [agi] Friendly AI in an unfriendly world... AI to the future socieities.... Four axioms (WAS Two draft papers . . . .)

2006-06-12 Thread James Ratcliff
AIs only have the instincts and goals that they are built with. They are not, e.g., inherently territorial. They do not even inherently want to survive. If you want them to have the goal of surviving, then you must include that among their goals. You appear to be presuming that an FAI would have

[agi] How do you evaluate?... Reward Punishment? .... Motivational system

2006-06-12 Thread DGoe
How do you score any given AI System test run? Dan Goe From : James Ratcliff [EMAIL PROTECTED] To : agi@v2.listbox.com Subject : Re: [agi] Reward versus Punishment? Motivational system Date : Mon, 12 Jun 2006 06:13:45 -0700 (PDT)

[agi] Building in Biases....Friendly AI in an unfriendly world... AI to the future socieities

2006-06-12 Thread DGoe
I would assume that any one builder of any AI system would unconsciously build in his own belief system and, inevitably, his biases. This being seed AI, over time this might mutate depending on the designer's influence and tolerance toward where the AI system might be directed or

Re: [agi] Re: Four axioms (WAS Two draft papers . . . .)

2006-06-12 Thread Mark Waser
James: So are you separating 'undesirable, horrible, or immoral' from the term of friendliness? I am removing the requirement from friendliness that it match everybody's opinion on "undesirable, horrible, or immoral", since that is clearly an impossible undertaking. However, friendly is

Re: [agi] Reward versus Punishment? .... Motivational system

2006-06-12 Thread William Pearson
On 12/06/06, James Ratcliff [EMAIL PROTECTED] wrote: Will, Right now I would think that a negative reward would be usable for this aspect. I agree it is usable. But I am not sure it is necessary; you can just normalise the reward value. Let's say for most states you normally give 0 for a
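Pearson's point above can be sketched concretely: rather than issuing explicitly negative rewards, shift (normalise) the whole reward scale so that the ordinary outcome maps to a positive baseline and "punishment" is simply a reward below that baseline. The function name and baseline constant here are illustrative assumptions, not from any actual system discussed on the list.

```python
BASELINE = 1.0  # assumed reward for an ordinary, neutral state

def normalised_reward(raw_reward: float, baseline: float = BASELINE) -> float:
    """Shift a raw (possibly negative) reward onto a non-negative scale.

    A raw reward of 0 becomes the baseline; raw punishments (< 0) become
    rewards smaller than the baseline, floored at zero.
    """
    return max(0.0, baseline + raw_reward)

# The ordering of values is preserved for raw rewards >= -baseline, so an
# agent comparing plans prefers the same actions as with signed rewards.
assert normalised_reward(0.0) == 1.0    # neutral state -> baseline
assert normalised_reward(-0.5) == 0.5   # mild punishment -> below baseline
assert normalised_reward(0.5) == 1.5    # positive reward -> above baseline
```

The design choice being debated is exactly this: whether a distinct negative signal adds anything that a shifted all-positive scale does not already capture.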

Re: [agi] Evaluating the moduals... How do you evaluate?... Reward Punishment?

2006-06-12 Thread James Ratcliff
There are many modules for the AGI, but the main one that would use the scoring formula is the Action_Selector or the Controller (Wikipedia action-selection article: http://en.wikipedia.org/wiki/Action_selection). Most of the other modules are not scored according to this type of formula. The
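The Action_Selector role described above can be sketched as a loop that scores each candidate plan with a single formula and executes the best one. The scoring terms here (expected reward minus weighted expected harm, echoing the hurt-avoidance discussion elsewhere in this thread) are assumptions for illustration, not the actual formula used in any of the systems discussed.

```python
from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    expected_reward: float  # predicted positive outcome value
    expected_harm: float    # predicted harm to self or others

def score(plan: Plan, harm_weight: float = 2.0) -> float:
    """Score a plan; harm is weighted more heavily than reward."""
    return plan.expected_reward - harm_weight * plan.expected_harm

def select_action(plans: list[Plan]) -> Plan:
    """Pick the highest-scoring plan (the Action_Selector / Controller role)."""
    return max(plans, key=score)

plans = [
    Plan("shortcut that risks injury", expected_reward=5.0, expected_harm=3.0),
    Plan("safe but slower route", expected_reward=3.0, expected_harm=0.0),
]
assert select_action(plans).name == "safe but slower route"
```

Only this selection module needs such a formula; perception, memory, and other modules feed it estimates rather than being scored themselves, which matches the distinction Ratcliff draws.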

Re: [agi] Re: Four axioms (WAS Two draft papers . . . .)

2006-06-12 Thread James Ratcliff
Mark, OK, I have a little better understanding of what you are trying to accomplish. If not, it would be stuck forever with its first created beliefs, which, looking back on the human race, definitely does not seem to be a good idea. Yes, I DEFINITELY want the AGI to be stuck forever with its first

[agi] Right and wrong good or bad... Reward Punishment?

2006-06-12 Thread DGoe
Generally, we reward good behavior and punish bad behavior. Doing so with AI would seem to be most wise, to direct the learning and development to maximize knowing what is good and what is bad. Otherwise the AI system does not know what a bad module might be, for not being informed by

Re: Worthwhile time sinks was Re: [agi] list vs. forum

2006-06-12 Thread Ben Goertzel
Hi Sanjay, On 6/12/06, William Pearson [EMAIL PROTECTED] wrote: On 10/06/06, sanjay padmane [EMAIL PROTECTED] wrote: I feel you should discontinue the list. That will force people to post there. I'm not using the forum only because no one else is using it (or very few), and everyone is

[agi] Developing AI... Worthwhile time sinks was

2006-06-12 Thread DGoe
Wouldn't evolutionary seed AI find the parallel process and test that for advantages? Dan Goe From : William Pearson [EMAIL PROTECTED] To : agi@v2.listbox.com Subject : Worthwhile time sinks was Re: [agi] list vs. forum Date : Mon, 12 Jun

Re: Worthwhile time sinks was Re: [agi] list vs. forum

2006-06-12 Thread Yan King Yin
If we want to increase content and get more people interested, I think the best thing to devote our time and effort to is a wiki rather than a forum. Threads have little chance of staying on topic, and finding things in them as they meander around becomes nightmarish. As we can't present a

Re: Worthwhile time sinks was Re: [agi] list vs. forum

2006-06-12 Thread sanjay padmane
Even though only a few have reacted to my (somewhat threatening ;-) ) proposal to discontinue this list, it seems that people are comfortable with it, anyhow... Someone can experiment with automated posting of all forum messages to the list, as and when they are created. Speaking of high quality,

Re: [agi] Re: Four axioms (WAS Two draft papers . . . .)

2006-06-12 Thread Mark Waser
From: James Ratcliff To: agi@v2.listbox.com Sent: Monday, June 12, 2006 3:52 PM Subject: Re: [agi] Re: Four axioms (WAS Two draft papers . . . .) you mentioned in a couple of responses the volition of the masses as your overall formula, I am putting a couple of thoughts together here, and