Harshad RJ wrote:
On Feb 3, 2008 10:22 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
Harshad RJ wrote:
I read the conversation from the start and believe that Matt's
argument is correct.

Did you mean to send this only to me? It looks as though
On Feb 3, 2008 10:22 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
My argument was (at the beginning of the debate with Matt, I believe)
that, for a variety of reasons, the first AGI will be built with
peaceful motivations. Seems hard to believe, but for various technical
reasons I think we

On Feb 18, 2008 7:41 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
In other words, you cannot have your cake and eat it too: you cannot
assume that this hypothetical AGI is (a) completely able to build its
own understanding of the world, right up to the human level and beyond,
while also

Matt Mahoney wrote:
On Feb 3, 2008 10:22 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
My argument was (at the beginning of the debate with Matt, I believe)
that, for a variety of reasons, the first AGI will be built with
peaceful motivations. Seems hard to believe, but for various technical

On Feb 18, 2008 10:11 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
You assume that the system does not go through a learning phase
(childhood) during which it acquires its knowledge by itself. Why do
you assume this? Because an AGI that was motivated only to seek
electricity and

On 18/02/2008, Richard Loosemore [EMAIL PROTECTED] wrote:
... might be true. Yes, a motivation of some form could be coded into
the system, but the paucity of expression at the level at which it is
coded may still allow unintended motivations to emerge.
It seems that in the AGI

Bob Mottram wrote:
On 18/02/2008, Richard Loosemore [EMAIL PROTECTED] wrote:
... might be true. Yes, a motivation of some form could be coded into
the system, but the paucity of expression at the level at which it is
coded may still allow unintended motivations to emerge.
It seems

--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
On Feb 3, 2008 10:22 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
My argument was (at the beginning of the debate with Matt, I believe)
that, for a variety of reasons, the first AGI will be built with
peaceful

Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
On Feb 3, 2008 10:22 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
My argument was (at the beginning of the debate with Matt, I believe)
that, for a variety of reasons, the first AGI will be built with

Harshad RJ wrote:
On Feb 18, 2008 10:11 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
You assume that the system does not go through a learning phase
(childhood) during which it acquires its knowledge by itself. Why do
you assume this? Because an AGI

--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
Perhaps worm is the wrong word. Unlike today's computer worms, it would be
intelligent, it would evolve, and it would not necessarily be controlled by or
serve the interests of its creator. Whether or not it is

On Jan 28, 2008, at 12:03 PM, Richard Loosemore wrote:
Your comments below are unfounded, and all the worse for being so
poisonously phrased. If you read the conversation from the
beginning you will discover why: Matt initially suggested the idea
that an AGI might be asked to develop a

Randall,
Your comments below are unfounded, and all the worse for being so
poisonously phrased. If you read the conversation from the beginning
you will discover why: Matt initially suggested the idea that an AGI
might be asked to develop a virus of maximum potential, for purposes of