Hank Conn wrote:
Yes, now the point being that if you have an AGI and you aren't in a
sufficiently fast RSI loop, there is a good chance that if someone else
were to launch an AGI with a faster RSI loop, your AGI would lose
control to the other AGI where the goals of the other AGI differed
James Ratcliff wrote:
You could start a smaller AI with a simple hardcoded desire or reward
mechanism to learn new things, or to increase the size of its knowledge.
That would be a simple way to programmatically insert it. That, along
with a seed AI, must be put in there at the beginning.
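A minimal sketch of such a hardcoded reward for "learning new things", where the reward signal is simply the growth of the agent's knowledge base. All names and structures below are illustrative assumptions, not anything from the system being discussed:

```python
# Hypothetical sketch: an agent whose hardcoded reward is growth of its
# knowledge base, standing in for a "desire to learn new things".
# Nothing here is from the original poster's design.

class CuriousAgent:
    def __init__(self):
        self.knowledge = set()   # facts the agent has acquired so far

    def reward(self, observation):
        """Hardcoded reward: +1 for each genuinely new fact observed."""
        new_facts = set(observation) - self.knowledge
        self.knowledge |= new_facts
        return len(new_facts)

agent = CuriousAgent()
r1 = agent.reward(["sky is blue", "grass is green"])  # two new facts
r2 = agent.reward(["sky is blue", "fire is hot"])     # one new, one already known
```

Under this toy rule the agent is rewarded only for novelty, so repeated observations earn nothing, which is one simple way to bias a system toward increasing the size of its knowledge.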
On Nov 30, 2006, at 12:21 PM, Richard Loosemore wrote:
Recursive Self Improvement?
The answer is yes, but with some qualifications.
In general RSI would be useful to the system IF it were done in such
a way as to preserve its existing motivational priorities.
How could the system
On Nov 30, 2006, at 10:15 PM, Hank Conn wrote:
Yes, now the point being that if you have an AGI and you aren't in a
sufficiently fast RSI loop, there is a good chance that if someone
else were to launch an AGI with a faster RSI loop, your AGI would
lose control to the other AGI where the
On 12/1/06, Matt Mahoney [EMAIL PROTECTED] wrote:
--- Hank Conn [EMAIL PROTECTED] wrote:
The further the actual target goal state of that particular AI is away
from the actual target goal state of humanity, the worse.
The goal of ... humanity... is that the AGI implemented that will have
On 11/30/06, Mark Waser [EMAIL PROTECTED] wrote:
With many SVD systems, however, the representation is more vector-like
and *not* conducive to easy translation to human terms. I have two answers
to these cases. Answer 1 is that it is still easy for a human to look at
the closest matches to
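One way to make "look at the closest matches" concrete: even when an SVD-style representation is an opaque vector, a human can inspect a concept by ranking labeled neighbors by cosine similarity. The vectors and labels below are invented purely for illustration:

```python
import math

# Toy sketch: given opaque low-dimensional vectors (e.g. rows of an SVD
# factor), a human can still interpret one by listing its closest
# labeled neighbors. These vectors are made up for illustration only.

concepts = {
    "cat":   [0.9, 0.1, 0.0],
    "dog":   [0.8, 0.2, 0.1],
    "car":   [0.1, 0.9, 0.3],
    "truck": [0.0, 0.8, 0.4],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def closest_matches(query, k=2):
    """Rank labeled concepts by cosine similarity to an opaque vector."""
    ranked = sorted(concepts,
                    key=lambda name: cosine(query, concepts[name]),
                    reverse=True)
    return ranked[:k]

mystery = [0.85, 0.15, 0.05]      # some unlabeled vector from the model
matches = closest_matches(mystery)
```

Here the unlabeled vector's nearest labeled neighbors ("cat", then "dog") give a human-readable gloss of what the vector represents, which is the sense in which the closest matches make the representation inspectable.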
This seems rather circular and ill-defined.
- samantha
Yeah I don't really know what I'm talking about at all.
-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303
--- Philip Goetz [EMAIL PROTECTED] wrote:
On 11/30/06, James Ratcliff [EMAIL PROTECTED] wrote:
One good one:
Consciousness is a quality of the mind generally regarded to comprise
qualities such as subjectivity, self-awareness, sentience, sapience,
and the ability to perceive the
On 12/1/06, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote:
I don't think so. The singularitarians tend to have this mental model of a
superintelligence that is essentially an analogy of the difference between an
animal and a human. My model is different. I think there's a level of
universality,
--- Hank Conn [EMAIL PROTECTED] wrote:
On 12/1/06, Matt Mahoney [EMAIL PROTECTED] wrote:
The goals of humanity, like those of all other species, were determined
by evolution: to propagate the species.
That's not the goal of humanity. That's the goal of the evolution of
humanity, which
On Friday 01 December 2006 20:06, Philip Goetz wrote:
Thus, I don't think my ability to follow rules written on paper to
implement a Turing machine proves that the operations powering my
consciousness are Turing-complete.
Actually, I think it does prove it, since your simulation of a Turing
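The rule-following being argued about can be made concrete with a minimal Turing machine simulator. This is only a sketch of the textbook construction, not either poster's argument; the machine below (which flips every bit on its tape and halts on a blank) and its encoding are assumptions for illustration:

```python
# Minimal Turing machine simulator, to make "following rules written on
# paper" concrete. The rule table and encoding are illustrative only.

def run_tm(rules, tape, state="start", pos=0, max_steps=1000):
    tape = list(tape)
    for _ in range(max_steps):
        symbol = tape[pos] if 0 <= pos < len(tape) else "_"  # "_" = blank
        if (state, symbol) not in rules:
            break                       # no applicable rule: halt
        new_symbol, move, state = rules[(state, symbol)]
        if 0 <= pos < len(tape):
            tape[pos] = new_symbol
        pos += 1 if move == "R" else -1
    return "".join(tape)

# Rule table: in state "start", flip the bit and move right;
# halt when the blank past the end of the tape is reached.
flip_rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
}

result = run_tm(flip_rules, "0110")
```

A person with pencil and paper can execute exactly this table step by step, which is the sense in which "following rules written on paper" implements a Turing machine.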
Samantha Atkins wrote:
On Nov 30, 2006, at 12:21 PM, Richard Loosemore wrote:
Recursive Self Improvement?
The answer is yes, but with some qualifications.
In general RSI would be useful to the system IF it were done in such a
way as to preserve its existing motivational priorities.
Matt Mahoney wrote:
I guess we are arguing terminology. I mean that the part of the brain which
generates the reward/punishment signal for operant conditioning is not
trainable. It is programmed only through evolution.
There is no such thing. This is the kind of psychology that died out at
Matt Mahoney wrote:
--- Hank Conn [EMAIL PROTECTED] wrote:
The further the actual target goal state of that particular AI is away from
the actual target goal state of humanity, the worse.
The goal of ... humanity... is that the AGI implemented that will have the
strongest RSI curve also will
A little late on the draw here - I am a new member to the list and was
checking out the archives. I had an insight into this debate over
understanding.
James Ratcliff wrote:
Understanding is a dum-dum word; it must be specifically defined as a
concept or not used. Understanding art is a