Mike, it would basically create a very large semantic net, with weights, confidences, etc. The power here would be directly linked to the word "generating": power "generating" (the electricity sense) and the political sense would generally match up with different parts of the net, and be easily
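The sense-matching idea above can be sketched in miniature. This is a hypothetical illustration, not the poster's actual design: each sense of "generating" is a node with weighted links to context concepts, and the sense whose weighted overlap with the surrounding words is largest wins. All node names and weights here are invented for the example.

```python
# Hypothetical weighted semantic net: each sense of "generating" links
# to context concepts with a weight (strength/confidence of association).
senses = {
    "generating/power": {"electricity": 0.9, "turbine": 0.8, "grid": 0.7},
    "generating/political": {"support": 0.9, "movement": 0.8, "votes": 0.7},
}

def disambiguate(context_words):
    """Pick the sense whose weighted overlap with the context is largest."""
    def score(links):
        return sum(w for word, w in links.items() if word in context_words)
    return max(senses, key=lambda s: score(senses[s]))

print(disambiguate({"the", "electricity", "grid"}))  # generating/power
print(disambiguate({"votes", "and", "support"}))     # generating/political
```

In a full system the weights would presumably be learned rather than hand-set, and the "match up with a different part of the net" effect falls out of which region accumulates the most activation.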
I disagree that humans really have a "stable motivational system"; we would have to use a much stricter interpretation of that phrase. Overall, humans as a society have a generally stable system (discounting war, etc.), but as individuals, too many humans are unstable in many small if not
This is why I finished my essay with a request for comments based on an
understanding of what I wrote.
This is not a comment on my proposal, only a series of unsupported
assertions that don't seem to hang together into any kind of argument.
Richard Loosemore.
Matt Mahoney wrote:
Thank you. I've studied the paper and the tested 'improvements'. The experiments in the paper are certainly useful and are of the kind that test parameters without modifying the actual model. My experiments, however, are somewhat different, and you could say they explore a broader field of
Actually, it consists of two completely different networks: one close to a neural net, and the other a regular Bayesian network. The first stores/relates patterns; the second simply does inference using the conditional probability matrices.
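The inference half of that description can be sketched very simply. This is a minimal, hypothetical illustration of inference with a conditional probability matrix, not the poster's actual code: given a prior over a variable X (say, supplied by the pattern-storing network) and a matrix M with M[j][i] = P(Y=j | X=i), the marginal over Y is the matrix-vector product. The matrix and prior values below are invented.

```python
# Hypothetical conditional probability matrix: M[j][i] = P(Y=j | X=i).
# Each column sums to 1.
M = [[0.9, 0.2],
     [0.1, 0.8]]

# Prior over X (hypothetical values, e.g. from the pattern-storing network).
p_x = [0.6, 0.4]

# Forward inference: P(Y=j) = sum over i of P(Y=j | X=i) * P(X=i).
p_y = [sum(M[j][i] * p_x[i] for i in range(len(p_x))) for j in range(len(M))]

print(p_y)           # marginal distribution over Y
print(sum(p_y))      # stays a valid distribution (sums to 1)
```

Chaining such products through a network of such matrices is the "simply does inference" part; the interesting engineering is presumably in how the first network produces and relates the patterns that feed it.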
On 10/28/06, Pei Wang [EMAIL PROTECTED] wrote:
Sounds interesting. I'm
----- Original Message -----
From: James Ratcliff [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Saturday, October 28, 2006 10:23:58 AM
Subject: Re: [agi] Motivational Systems that are stable

I disagree that humans really have a "stable motivational system" or would have to have a much more strict
For an AGI it is very important that its motivational system be stable. The AGI should not be able to reprogram it.
I believe these are two completely different things. You can never assume an AGI will be unable to reprogram its goal system, while you can be virtually certain an AGI will never
Hank Conn wrote:
Although I understand, in vague terms, what idea Richard is attempting
to express, I don't see why having massive numbers of weak constraints
or large numbers of connections from [the] motivational system to
[the] thinking system gives any more reason to believe it is