I'm graduating with my BS in Computer Science with a specialization in AI
this fall, and then probably doing the MSAI program here at UGA. Eventually
I want to be at the level where I can contribute real progress towards
AGI/FAI. Maybe after I get my MS I'll try to join an AGI project (e.g.,
Novamente, A2I2, etc.)
"as a person: nihilism & the human condition. crime, drugs, debauchery.
self-destructive and life-endangering behaviour; rejection of social
norms. the world as I know it is a rather petty, woeful place and I
pretty much think modern city-dwelling life is a stenchy wet mouthful
of arse - not to sa
"I think we're within a decade of that tipping point already."
What are some things you anticipate happening within the next decade?
-hank
On 12/8/06, J. Storrs Hall <[EMAIL PROTECTED]> wrote:
On Thursday 07 December 2006 05:29, Brian Atkins wrote:
> The point being although this task
"Ummm... perhaps your skepticism has more to do with the inadequacies
of **your own** AGI design than with the limitations of AGI designs in
general?"
It has been my experience that one's expectations about the future of
AI/Singularity are directly dependent upon one's understanding/design of AGI
and
Brian, thanks for your response, and Dr. Hall, thanks for your post as well.
I will get around to responding to this as soon as time permits. I am
interested in what Michael Anissimov or Michael Wilson has to say.
On 12/4/06, Brian Atkins <[EMAIL PROTECTED]> wrote:
I think this is an interesting,
On 12/1/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:
--- Hank Conn <[EMAIL PROTECTED]> wrote:
> On 12/1/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > The "goals of humanity", like all other species, was determined by
> > evolution.
> > It
This seems rather circular and ill-defined.
- samantha
Yeah I don't really know what I'm talking about at all.
On 12/1/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:
--- Hank Conn <[EMAIL PROTECTED]> wrote:
> The further the actual target goal state of that particular AI is away from
> the actual target goal state of humanity, the worse.
>
> The goal of ... humanity... is that the A
On 11/30/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Hank Conn wrote:
[snip...]
> > I'm not asserting any specific AI design. And I don't see how
> > a motivational system based on "large numbers of diffuse constraints"
> &
On 11/19/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:
Hank Conn wrote:
> > Yes, you are exactly right. The question is which of my
> assumption are
> > unrealistic?
>
> Well, you could start with the idea that the AI has "... a strong goal
On 11/17/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:
Hank Conn wrote:
> On 11/17/06, *Richard Loosemore* <[EMAIL PROTECTED]> wrote:
>
> Hank Conn wrote:
> > Here are some of my attempts at explaining RSI...
>
On 11/17/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:
Hank Conn wrote:
> Here are some of my attempts at explaining RSI...
>
> (1)
> As a given instance of intelligence, defined as an algorithm of an agent
> capable of achieving complex goals in complex environments
On 11/16/06, Russell Wallace <[EMAIL PROTECTED]> wrote:
On 11/16/06, Hank Conn <[EMAIL PROTECTED]> wrote:
> How fast could RSI plausibly happen? Is RSI inevitable / how soon will
> it be? How do we truly maximize the benefit to humanity?
>
The concept is unfortunately
Here are some of my attempts at explaining RSI...
(1)
As a given instance of intelligence, defined as an algorithm of an agent
capable of achieving complex goals in complex environments, approaches the
theoretical limits of efficiency for this class of algorithms, intelligence
approaches infinity.
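(As a purely illustrative toy model of that claim, not any particular AGI
design: treat each self-improvement cycle as I_{n+1} = f(I_n); whether
intelligence actually "approaches infinity" then depends entirely on the
assumed shape of f. The function names and parameters below are made up.)

def improve(level, gain=1.1, ceiling=None):
    # One hypothetical RSI step: the agent redesigns itself and becomes
    # 'gain' times more capable; 'ceiling' stands in for diminishing returns.
    nxt = level * gain
    return nxt if ceiling is None else min(nxt, ceiling)

def run_rsi(start=1.0, steps=50, **kwargs):
    # Iterate the self-improvement map and return the whole trajectory.
    levels = [start]
    for _ in range(steps):
        levels.append(improve(levels[-1], **kwargs))
    return levels

print(run_rsi()[-1])              # no ceiling: grows without bound
print(run_rsi(ceiling=20.0)[-1])  # diminishing returns: saturates at 20

With gain > 1 and no ceiling the sequence diverges; with any saturation it
levels off, which is exactly the point under dispute in this thread.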
"IBM's system ["high thermal conductivity interface technology"], while not yet ready for commercial production, is reportedly so efficient that officials expect it will double cooling efficiency."
http://msnbc.msn.com/id/15484274/
Probably hyped beyond its actual performance, but thi
"For an AGI it is very important that a motivational system be stable. The AGI should not be able to reprogram it."
I believe these are two completely different things. You can never assume an AGI will be unable to reprogram its goal system, while you can be virtually certain an AGI will never cha
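(To make the distinction concrete, here is a minimal sketch of my own, with
made-up class names and no claim to being a real design: "unable to
reprogram" is a property enforced on the agent from outside, while "will
never choose to" is a property of the motivation system itself.)

class LockedGoalAgent:
    # The substrate is assumed to make the goal read-only; the agent could
    # want to rewrite it and still be unable to.
    def __init__(self, goal):
        self._goal = goal

    @property
    def goal(self):
        return self._goal

class StableGoalAgent:
    # The goal is technically writable, but rewriting it is just another
    # action evaluated under the *current* goal; if every rewrite scores
    # negatively, it never gets chosen.
    def __init__(self, goal):
        self.goal = goal

    def maybe_rewrite(self, new_goal, value_under_current_goal):
        if value_under_current_goal <= 0:
            return False   # stable by choice, not by inability
        self.goal = new_goal
        return True

The first kind of stability fails the moment the AGI finds a way around the
lock; the second is the property worth arguing about.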