On Fri, 09 Jun 2006 19:13:19 -0500, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
What about punishment?
Currently I see punishment as the programs in control of outputting (and
hence the ones that get reward) losing that control, and with it the
chance to get reinforcement. However, experiment or better theory
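The competition-for-control idea described above can be sketched roughly as follows. This is a minimal illustration of one possible reading, not anyone's actual system; the names `Controller`, `reward`, and `punish` are hypothetical. Punishment is modeled as the message describes it: the program in control loses weight, and with it the chance to act and be reinforced later.

```python
class Controller:
    """A pool of candidate programs competing for output control."""

    def __init__(self, programs):
        # every candidate program starts with equal weight
        self.weights = {name: 1.0 for name in programs}

    def select(self):
        # the highest-weight program wins control of the output
        return max(self.weights, key=self.weights.get)

    def reward(self, name, amount=0.5):
        # reinforcement goes to the program that produced the output
        self.weights[name] += amount

    def punish(self, name, factor=0.5):
        # punishment: losing weight means losing control, and hence
        # losing future opportunities for reinforcement
        self.weights[name] *= factor


ctrl = Controller(["prog_a", "prog_b"])
winner = ctrl.select()   # prog_a holds control initially
ctrl.punish(winner)      # after punishment, prog_b takes over
```

Under this reading, punishment needs no separate negative signal: demotion out of the control loop is itself the penalty, since only the controlling program can earn future reward.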
Phil,
The answer is
* I believe the Forum is a superior mode of communication, IF PEOPLE
WILL USE IT, because of the much nicer threading and archiving
facilities
* People in this community seem to prefer to use a list to a forum
So, the Forum exists in the hopes that eventually discussion
I feel you should discontinue the list. That will force people to post there.
I'm not using the forum only because no one else is using it (or very
few), and everyone is perhaps doing the same.
Another advantage is that it will expose the discussions to google and
it will draw more people with
On 6/10/06, sanjay padmane [EMAIL PROTECTED] wrote:
I feel you should discontinue the list. That will force people to post there. I'm not using the forum only because no one else is using it (or very few), and everyone is perhaps doing the same.
And I feel the forum should be discontinued, so as to
From: James Ratcliff
To: agi@v2.listbox.com
Sent: Friday, June 09, 2006 4:13 PM
Subject: Re: [agi] Two draft papers: AI and
existential risk; heuristics and biases
Hmm, what is your goal again? I am confused.
To maximally increase Volition
actualization/wish fulfillment (Axiom 1).
That's right, I forgot: the archives are searchable. But the formats
are not so good. Forums are more organized. There is also a chance of
mixing up the lists if you are subscribed to many of them. You have
some more pros/cons here, but I guess it's a matter of habit :-)
The forum can return a
- Original Message -
From: "Jef Allbright" [EMAIL PROTECTED]
Sent: Thursday, June 08, 2006 10:04 PM
Subject: Re: [agi] Four axioms
It seems to me it would be better to say that there is no absolute or
objective good-bad because evaluation of goodness is necessarily
relative to the
If your AI was operating on the web, it might find itself at a severe
disadvantage with all of those con artists...
Your AI might lose badly...
While being friendly might be nice, I think that is a position
vulnerable to being taken advantage of...
If you are in a war game-simulation or real
[EMAIL PROTECTED] wrote:
If your AI was operating on the web, it might find itself at a severe
disadvantage with all of those con artists...
Your AI might lose badly...
Friendly does not equal trusting. It does not equal stupid. It does
not equal not being willing to learn from the
- Original Message -
From: "Charles D Hixson" [EMAIL PROTECTED]
Sent: Thursday, June 08, 2006 7:26 PM
Subject: Re: [agi] Four axioms (Was Two draft
papers: AI and existential risk; heuristics and biases)
I think that Axiom 2 needs a bit of
work.
Agreed.
as I read it, it