If you're using a virtual environment for AGI testing,
are you rolling your own (if so, is it open-sourced?), or
using an off-the-shelf one?
Are you using massive parallelism and clusters, or
hardware acceleration (either game physics or GPU), or
are you running a single instance on one machine?
What is your
Hi,
If you're using a virtual environment for AGI testing,
are you rolling your own (if so, is it open-sourced?), or
using an off-the-shelf one?
A little of both... we have built our own (open source) 3D simulation
world environment, AGISim
sourceforge.net/projects/agisim/
but it's based on the
A couple of things below, William:
1) I agree that direct reward has to be in-built (into the brain / AI system).
2) I don't see why direct reward cannot be used for rewarding mental achievements. I think that this "direct rewarding mechanism" is preprogrammed in genes and cannot be used directly by the mind. This
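One way to picture "in-built", as a rough Python sketch (all names here are illustrative, not anyone's actual design): the reward function is fixed at construction, so the rest of the system can query it but never rewrite it, yet it can still pay out for mental achievements if it inspects internal state:

    class Agent:
        def __init__(self, reward_fn):
            # The "genes": fixed at construction; the rest of the
            # system can query this signal but never rewrite it.
            self._reward_fn = reward_fn

        def reward(self, state):
            # The mind reads the direct reward...
            return self._reward_fn(state)
        # ...but no method is provided to replace _reward_fn at runtime.

    # A mental achievement (an internal state change) can still be
    # rewarded directly if the built-in function inspects internal state:
    agent = Agent(lambda state: 1.0 if state.get("solved_subgoal") else 0.0)
    print(agent.reward({"solved_subgoal": True}))  # prints 1.0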
Will,
Right now I would think that a negative reward would be usable for this aspect. I am using the positive/negative reward system right now for the motivational/planning aspects of the AGI. So if sitting at a desk considering a plan of action that might hurt himself or another, the plan would have
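A rough Python sketch of that planning step (the penalty value and the harm predicate are invented for illustration): a plan predicted to cause harm collects a large negative reward and loses out to safer alternatives:

    def score_plan(plan, predicted_harm):
        # Sum the plan's step rewards; a plan predicted to hurt the
        # agent or another takes a large negative reward.
        score = sum(plan["rewards"])
        if predicted_harm(plan):
            score -= 100.0
        return score

    def choose_plan(plans, predicted_harm):
        # The planner keeps the highest-scoring plan, so harmful
        # plans lose out to safer alternatives.
        return max(plans, key=lambda p: score_plan(p, predicted_harm))

    plans = [
        {"name": "shortcut", "rewards": [5.0, 5.0], "harmful": True},
        {"name": "safe",     "rewards": [3.0, 3.0], "harmful": False},
    ]
    best = choose_plan(plans, predicted_harm=lambda p: p["harmful"])
    print(best["name"])  # prints: safe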
I concur; having these readily available in my mailbox, and being able to reply to them in a simple manner, is very useful. I would like to see them all indexed in Google as well though, to draw in a larger viewing audience.
James Ratcliff
Russell Wallace [EMAIL PROTECTED] wrote: On 6/10/06, sanjay
More importantly, why hasn't this guy been banned from the list yet?
I'm new here, so if there's a no-bans policy I don't know about, please
excuse the question.
http://www.nothingisreal.com/mentifex_faq.html
I would assume that you all would have read this page with details
about this spammer?
James: There still would be abortion/no abortion, xlaw/no xlaw, that would be deemed unfriendly.
Mark: No. There still would be abortion/no abortion, xlaw/no xlaw, that would be decried by some as undesirable, horrible, or immoral. Once a sufficient number of lawmakers are friendly -- only friendly
AIs only have the instincts and goals that they are built with. They are not, e.g., inherently territorial. They do not even inherently want to survive. If you want them to have the goal of surviving, then you must include that among their goals. You appear to be presuming that an FAI would have
How do you score any given AI System test run?
Dan Goe
From : James Ratcliff [EMAIL PROTECTED]
To : agi@v2.listbox.com
Subject : Re: [agi] Reward versus Punishment? Motivational system
Date : Mon, 12 Jun 2006 06:13:45 -0700 (PDT)
I would assume any one builder of any AI system would unconsciously
build in his own belief system and, understandably, his biases.
This being seed AI, over time this might be mutated depending on the
designer's influence and tolerance toward where the AI System might be
directed or
James: So are you separating
'undesirable, horrible, or immoral'
from the term of
friendliness?
I am removing the requirement
from friendliness that it match everybody's opinion on
"undesirable, horrible, or immoral" since that is clearly an impossible
undertaking. However, friendly is
On 12/06/06, James Ratcliff [EMAIL PROTECTED] wrote:
Will,
Right now I would think that a negative reward would be usable for this
aspect.
I agree it is usable, but I am not sure it is necessary; you can just
normalise the reward value. Let's say for most states you normally give 0 for a
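Something like this, assuming raw rewards in [-1, 1] (the range and names are my own illustration): an affine rescale maps them onto [0, 1], and since it preserves which of two equal-length reward sequences sums higher, a "punishment" becomes just a below-baseline positive value:

    def normalise(raw, lo=-1.0, hi=1.0):
        # Affine map from [lo, hi] to [0, 1]. For equal-length reward
        # sequences this preserves which sequence sums higher, so a
        # punishment becomes a below-baseline positive reward.
        return (raw - lo) / (hi - lo)

    print(normalise(-1.0))  # 0.0  (worst case, no negative number needed)
    print(normalise(0.0))   # 0.5  (the old neutral baseline)
    print(normalise(1.0))   # 1.0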
There are many modules for the AGI, but the main one that would be using the scoring formula is the Action_Selector, or the Controller (wiki action selection article: http://en.wikipedia.org/wiki/Action_selection). Most of the other modules are not scored according to this type of formula. The
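For illustration, the core of such an Action_Selector might look like this in Python; the scoring formula below is a placeholder of my own, not the module's real one:

    def select_action(state, actions, score):
        # The Action_Selector's core step: score every candidate
        # action in the current state and commit to the best one.
        return max(actions, key=lambda action: score(state, action))

    # Placeholder scoring formula: expected reward minus an effort cost.
    def score(state, action):
        expected_reward = {"explore": 0.9, "rest": 0.3}[action]
        effort = 0.5 if action == "explore" and state["energy"] < 0.2 else 0.0
        return expected_reward - effort

    print(select_action({"energy": 0.7}, ["explore", "rest"], score))  # explore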
Mark,
Ok, I have a little better understanding of what you are trying to accomplish.
If not, it would be stuck forever with its first created beliefs, which, looking back on the human race, definitely does not seem to be a good idea.
Yes, I DEFINITELY want the AGI to be stuck forever with its first
Generally, we reward good behavior and punish bad behavior.
Doing so with AI would seem to be most wise, to direct the learning
and development to maximize knowing what is good and what is bad.
Otherwise the AI system does not know what a bad module might be, for not
being informed by
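One minimal way to cash that out, sketched in Python with invented names: each module carries a weight that reward nudges up and punishment nudges down, so the system learns which modules are bad from the feedback itself:

    def apply_feedback(weights, module, feedback, lr=0.1):
        # feedback is +1 for reward, -1 for punishment; over many
        # trials a consistently punished ("bad") module loses weight
        # and is selected less often.
        weights[module] += lr * feedback

    weights = {"planner_a": 0.5, "planner_b": 0.5}
    apply_feedback(weights, "planner_a", +1)  # rewarded: weight rises
    apply_feedback(weights, "planner_b", -1)  # punished: weight falls
    print(weights)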
Hi Sanjay,
On 6/12/06, William Pearson [EMAIL PROTECTED] wrote:
On 10/06/06, sanjay padmane [EMAIL PROTECTED] wrote:
I feel you should discontinue the list. That will force people to post there.
I'm not using the forum only because no one else is using it (or very
few), and everyone is
Wouldn't evolutionary seed AI find the parallel process and test that for
advantages?
Dan Goe
From : William Pearson [EMAIL PROTECTED]
To : agi@v2.listbox.com
Subject : Worthwhile time sinks was Re: [agi] list vs. forum
Date : Mon, 12 Jun
If we want to increase content and get more people interested, I think the best thing to devote our time and effort to is a wiki rather than a forum. Threads have little chance of staying on topic, and finding
things in them as they meander around becomes nightmarish. As we can't present a
Even though only a few have reacted to my (somewhat threatening ;-) ) proposal to discontinue this list, it seems that people are comfortable with it, anyhow...
Someone can experiment with automated posting of all forum messages to the list, as and when they are created.
Speaking of high quality,
From: James Ratcliff
To: agi@v2.listbox.com
Sent: Monday, June 12, 2006 3:52 PM
Subject: Re: [agi] Re: Four axioms (WAS Two draft papers . . . .)
You mentioned in a couple of responses the volition of the masses as your overall formula. I am putting a couple of thoughts together here, and