Re: [agi] AGI interests

2007-04-10 Thread Hank Conn
as a person: nihilism the human condition. crime, drugs, debauchery. self-destructive and life-endangering behaviour; rejection of social norms. the world as I know it is a rather petty, woeful place and I pretty much think modern city-dwelling life is a stenchy wet mouthful of arse - not to say

Re: [agi] The Singularity

2006-12-05 Thread Hank Conn
Ummm... perhaps your skepticism has more to do with the inadequacies of **your own** AGI design than with the limitations of AGI designs in general? It has been my experience that one's expectations for the future of AI/Singularity are directly dependent upon one's understanding/design of AGI and

Re: [agi] RSI - What is it and how fast?

2006-12-04 Thread Hank Conn
Brian, thanks for your response, and Dr. Hall, thanks for your post as well. I will get around to responding to this as soon as time permits. I am interested in what Michael Anissimov or Michael Wilson has to say. On 12/4/06, Brian Atkins [EMAIL PROTECTED] wrote: I think this is an interesting,

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-02 Thread Hank Conn
On 12/1/06, Matt Mahoney [EMAIL PROTECTED] wrote: --- Hank Conn [EMAIL PROTECTED] wrote: On 12/1/06, Matt Mahoney [EMAIL PROTECTED] wrote: The goals of humanity, like those of all other species, were determined by evolution. It is to propagate the species. That's not the goal of humanity

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Hank Conn
On 12/1/06, Matt Mahoney [EMAIL PROTECTED] wrote: --- Hank Conn [EMAIL PROTECTED] wrote: The further the actual target goal state of that particular AI is away from the actual target goal state of humanity, the worse. The goal of ... humanity... is that the AGI implemented that will have

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Hank Conn
This seems rather circular and ill-defined. - samantha Yeah I don't really know what I'm talking about at all.

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-11-30 Thread Hank Conn
On 11/19/06, Richard Loosemore [EMAIL PROTECTED] wrote: Hank Conn wrote: Yes, you are exactly right. The question is which of my assumptions are unrealistic? Well, you could start with the idea that the AI has ... a strong goal that directs its behavior

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-11-30 Thread Hank Conn
On 11/30/06, Richard Loosemore [EMAIL PROTECTED] wrote: Hank Conn wrote: [snip...] I'm not asserting any specific AI design. And I don't see how a motivational system based on large numbers of diffuse constraints inherently prohibits RSI, or really has any relevance

Re: [agi] RSI - What is it and how fast?

2006-11-17 Thread Hank Conn
On 11/17/06, Richard Loosemore [EMAIL PROTECTED] wrote: Hank Conn wrote: Here are some of my attempts at explaining RSI... (1) As a given instance of intelligence, defined as an algorithm of an agent capable of achieving complex goals in complex environments, approaches the theoretical

Re: [agi] RSI - What is it and how fast?

2006-11-17 Thread Hank Conn
On 11/17/06, Richard Loosemore [EMAIL PROTECTED] wrote: Hank Conn wrote: On 11/17/06, *Richard Loosemore* [EMAIL PROTECTED] mailto:[EMAIL PROTECTED] wrote: Hank Conn wrote: Here are some of my attempts at explaining RSI... (1) As a given instance

[agi] RSI - What is it and how fast?

2006-11-16 Thread Hank Conn
Here are some of my attempts at explaining RSI... (1) As a given instance of intelligence, defined as an algorithm of an agent capable of achieving complex goals in complex environments, approaches the theoretical limits of efficiency for this class of algorithms, intelligence approaches

Re: [agi] RSI - What is it and how fast?

2006-11-16 Thread Hank Conn
On 11/16/06, Russell Wallace [EMAIL PROTECTED] wrote: On 11/16/06, Hank Conn [EMAIL PROTECTED] wrote: How fast could RSI plausibly happen? Is RSI inevitable / how soon will it be? How do we truly maximize the benefit to humanity? The concept is unfortunately based on a category error

Re: [agi] Funky Intel hardware, a few years off...

2006-11-01 Thread Hank Conn
IBM's system [high thermal conductivity interface technology], while not yet ready for commercial production, is reportedly so efficient that officials expect it will double cooling efficiency. http://msnbc.msn.com/id/15484274/ Probably being hyped more than its actual performance, but this will

Re: [agi] Motivational Systems that are stable

2006-10-28 Thread Hank Conn
For an AGI it is very important that a motivational system be stable. The AGI should not be able to reprogram it. I believe these are two completely different things. You can never assume an AGI will be unable to reprogram its goal system, while you can be virtually certain an AGI will never