Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-21 Thread Charles D Hixson

Joel Pitt wrote:

On 12/21/06, Philip Goetz <[EMAIL PROTECTED]> wrote:

That in itself is quite bad.  But what proves to me that Gould had no
interest in the scientific merits of the book is that, if he had, he
could at any time during those months have walked down one flight of
stairs and down a hall to E. O. Wilson's office, and asked him about
it.  He never did.  He never even told him they were meeting each week
to condemn it.

This one act, in my mind, is quite damning to Gould.


Definitely. I strongly dislike academics that behave like that.

Have open communication between individuals and groups instead of
running around stabbing each other's theories in the back. It's just
common courtesy. Unless of course they slept with your wife or
something, in which case such behaviour could possibly be excused
(even if it is scientifically/rationally the wrong way to go, we're
still slaves to our emotions).

You might check into the history of Russell's Principia Mathematica.  
Such activities are unpleasant, but have long been a part of the 
scientific community's politics.  (I'd be more explicit, but I'm not 
totally sure of the name of the mathematician who knew, long before 
Russell's publication, that the work was flawed in its basic principles, 
and I don't want to slander a named individual out of carelessness.  [And 
I could be mis-remembering the details...it's the kind of activity I 
generally ignore.])


That someone is a politician and manipulative does not mean that they 
aren't a good scientist...or we'd have very few good scientists.  If you 
aren't a politician, you can't rise in a bureaucracy.  Merit doesn't 
suffice.





Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-20 Thread Joel Pitt

On 12/21/06, Philip Goetz <[EMAIL PROTECTED]> wrote:

That in itself is quite bad.  But what proves to me that Gould had no
interest in the scientific merits of the book is that, if he had, he
could at any time during those months have walked down one flight of
stairs and down a hall to E. O. Wilson's office, and asked him about
it.  He never did.  He never even told him they were meeting each week
to condemn it.

This one act, in my mind, is quite damning to Gould.


Definitely. I strongly dislike academics that behave like that.

Have open communication between individuals and groups instead of
running around stabbing each other's theories in the back. It's just
common courtesy. Unless of course they slept with your wife or
something, in which case such behaviour could possibly be excused
(even if it is scientifically/rationally the wrong way to go, we're
still slaves to our emotions).

--
-Joel

"Unless you try to do something beyond what you have mastered, you
will never grow." -C.R. Lawton



Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-20 Thread Philip Goetz

On 12/13/06, Charles D Hixson <[EMAIL PROTECTED]> wrote:


To speak of evolution as being "forward" or "backward" is to impose upon
it our own preconceptions of the direction in which it *should* be
changing.  This seems...misguided.


Evolution includes random mutation and natural selection.  It is
meaningful to talk of the relative strength of these effects.  If you
write a genetic algorithm, for example, you must set the mutation
rate and the selection pressure.  Everyone who has ever run a GA has
had to make choices about that.

If you set the mutation rate too high relative to selection pressure,
you get devolution.  It is wrong to call it "evolution in a different
direction that does not appeal to your subjective values".
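
A minimal sketch of that trade-off, assuming a toy one-max problem and
arbitrary parameter values (illustrative only, not from the original post):

import random

def evolve(pop_size=100, genome_len=50, mutation_rate=0.01,
           tournament_size=3, generations=100):
    """Toy one-max GA: fitness is simply the number of 1-bits in the genome."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            # Selection pressure: keep the best of a small random tournament.
            parent = max(random.sample(pop, tournament_size), key=sum)
            # Mutation: flip each bit independently with probability mutation_rate.
            child = [1 - g if random.random() < mutation_rate else g
                     for g in parent]
            new_pop.append(child)
        pop = new_pop
    return sum(map(sum, pop)) / pop_size   # mean fitness of the final population

random.seed(0)
print(evolve(mutation_rate=0.01))   # selection dominates: mean fitness climbs toward 50
print(evolve(mutation_rate=0.30))   # mutation swamps selection: mean fitness stays near 25

With the second setting the population never improves, which is the
"devolution" case: the gene pool stays essentially random no matter how
hard selection pushes.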


Stephen J. Gould may well have been more of a popularizer than a research
scientist, but I feel that your criticisms of his presentations are
unwarranted and made either in ignorance or malice.


Gould did in fact do significant research as well as produce a good
textbook.  But I've read many of his books, and I believe they are all
slanted towards his social agenda, which is a strong form of
relativism.

When E. O. Wilson published Sociobiology, Stephen J. Gould helped lead
a book discussion group that took several months to study the book,
and write a damning response to it.  The response did not criticize
the science, but essentially said that it was socially irresponsible
to ask the sorts of questions that Wilson asked.

That in itself is quite bad.  But what proves to me that Gould had no
interest in the scientific merits of the book is that, if he had, he
could at any time during those months have walked down one flight of
stairs and down a hall to E. O. Wilson's office, and asked him about
it.  He never did.  He never even told him they were meeting each week
to condemn it.

This one act, in my mind, is quite damning to Gould.



Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-19 Thread Joel Pitt

On 12/14/06, Charles D Hixson <[EMAIL PROTECTED]> wrote:

To speak of evolution as being "forward" or "backward" is to impose upon
it our own preconceptions of the direction in which it *should* be
changing.  This seems...misguided.


IMHO, evolution tends to increase extropy and self-organisation. Thus
there is a direction to evolution. There is no direction to the random
mutations, nor to the changes within an individual - only to the
system of evolving agents.

--
-Joel

"Unless you try to do something beyond what you have mastered, you
will never grow." -C.R. Lawton



Re: [agi] RSI - What is it and how fast?

2006-12-15 Thread Philip Goetz

On 12/13/06, J. Storrs Hall, PhD. <[EMAIL PROTECTED]> wrote:

Nope.  I think, for example, that the process of evolution is universal -- it
shows the key feature of exponential learning growth, but with a very slow
clock. So there're other models besides a mammalian brain.

My mental model is to ask of a given person, suppose you had a community of
10,000 people just like him and they lived 10,000 years -- starting from
high-school math, could they prove Fermat's Last Theorem? This is
significantly more than the effort by all the mathematicians who ultimately
did build the solution.


Yes, but if we lived in a society where the average IQ was 160,
Fermat's Last Theorem wouldn't seem like as big a deal to you, and you
would probably use some other, higher gold standard, to define
"universal".



Re: [agi] RSI - What is it and how fast?

2006-12-13 Thread J. Storrs Hall, PhD.
Nope.  I think, for example, that the process of evolution is universal -- it 
shows the key feature of exponential learning growth, but with a very slow 
clock. So there're other models besides a mammalian brain.

My mental model is to ask of a given person, suppose you had a community of 
10,000 people just like him and they lived 10,000 years -- starting from 
high-school math, could they prove Fermat's Last Theorem? This is 
significantly more than the effort by all the mathematicians who ultimately 
did build the solution. 

In the environment of ancestral adaptation, there was a lot more value in 
learning what society generally knew and doing it competently than in abstract 
symbolic innovation. Indeed that's still very largely true. Genetically, more 
than one percent of innovators would have been a waste.

Josh



On Wednesday 13 December 2006 17:44, Philip Goetz wrote:
> On 12/8/06, J. Storrs Hall <[EMAIL PROTECTED]> wrote:
> > If I had to guess, I would say the boundary is at about IQ 140, so the
> > top 1% of humanity is universal -- but that's pure speculation; it may
> > well be that no human is universal, because of inductive bias, and it
> takes a community to search the space of biases and thus be universal.
>
> If you come up with an idea of "universal", based on your experience
> with what people have done, and you look at every living intelligent
> thing in nature, and then conclude that only the very top 1% you
> observe of the very smartest species you observe is universal --
> doesn't it seem likely that, if you had a sample of a smarter species
> to observe, with an average IQ of 140 and some going over 200, you
> would observe them doing some even more impressive things, and
> conclude that "universal" applied only to the top 1% of them?
>
> And if you had experience with machines with IQs ranging from
> 1000-2000... you would conclude that "universal" was exemplified only
> by the ones above 1900.
>



Re: [agi] RSI - What is it and how fast?

2006-12-13 Thread Philip Goetz

On 12/8/06, J. Storrs Hall <[EMAIL PROTECTED]> wrote:


If I had to guess, I would say the boundary is at about IQ 140, so the top 1%
of humanity is universal -- but that's pure speculation; it may well be that
no human is universal, because of inductive bias, and it takes a community to
search the space of biases and thus be universal.


If you come up with an idea of "universal", based on your experience
with what people have done, and you look at every living intelligent
thing in nature, and then conclude that only the very top 1% you
observe of the very smartest species you observe is universal --
doesn't it seem likely that, if you had a sample of a smarter species
to observe, with an average IQ of 140 and some going over 200, you
would observe them doing some even more impressive things, and
conclude that "universal" applied only to the top 1% of them?

And if you had experience with machines with IQs ranging from
1000-2000... you would conclude that "universal" was exemplified only
by the ones above 1900.



Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-13 Thread Charles D Hixson

Philip Goetz wrote:

...
The "disagreement" here is a side-effect of postmodern thought.
Matt is using "evolution" as the opposite of "devolution", whereas
Eric seems to be using it as meaning "change, of any kind, via natural
selection".

We have difficulty because people with political agendas - notably
Stephen J. Gould - have brainwashed us into believing that we must
never speak of evolution being "forward" or "backward", and that
change in any direction is equally valuable.  With such a viewpoint,
though, it is impossible to express concern about the rising incidence
of allergies, genetic diseases, etc.

...
To speak of evolution as being "forward" or "backward" is to impose upon 
it our own preconceptions of the direction in which it *should* be 
changing.  This seems...misguided.


To claim that, because all changes in the gene pool are evolution, they are 
therefore all equally valuable is to conflate two (orthogonal?) assertions.  
Value is inherently subjective to the entity doing the evaluation.  Evolution, 
interpreted as statistical changes in the gene pool, is inherently objective 
(though, of course, measurements of it may well be biased).


Stephen J. Gould may well have been more of a popularizer than a research 
scientist, but I feel that your criticisms of his presentations are 
unwarranted and made either in ignorance or malice.  This is not a 
strong belief, and were evidence presented I would be willing to change 
it, but I've seen such assertions made before with equal lack of 
evidential backing, and find them distasteful.


That Stephen J. Gould had some theories of how evolution works that are 
not universally accepted by those skilled in the field does not warrant 
your comments.  Many who are skilled in the field find them either 
intriguing or reasonable.  Some find them the only reasonable proposal.  
I can't speak for "most", as I am not a professional evolutionary 
biologist, and don't know that many folk who are, but it would not 
surprise me to find that most evolutionary biologists found his 
arguments reasonable and unexceptional, if not convincing.




Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-13 Thread Philip Goetz

On 12/5/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:


--- Eric Baum <[EMAIL PROTECTED]> wrote:
> Matt> We have slowed evolution through medical advances, birth control
> Matt> and genetic engineering, but I don't think we have stopped it
> Matt> completely yet.
>
> I don't know what reason there is to think we have slowed
> evolution, rather than speeded it up.
>
> I would hazard to guess, for example, that since the discovery of
> birth control, we have been selecting very rapidly for people who
> choose to have more babies. In fact, I suspect this is one reason
> why the US (which became rich before most of the rest of the world)
> has a higher birth rate than Europe.


...


The main effect of medical advances is to keep children alive who would
otherwise have died from genetic weaknesses, allowing these weaknesses to be
propagated.


The "disagreement" here is a side-effect of postmodern thought.
Matt is using "evolution" as the opposite of "devolution", whereas
Eric seems to be using it as meaning "change, of any kind, via natural
selection".

We have difficulty because people with political agendas - notably
Stephen J. Gould - have brainwashed us into believing that we must
never speak of evolution being "forward" or "backward", and that
change in any direction is equally valuable.  With such a viewpoint,
though, it is impossible to express concern about the rising incidence
of allergies, genetic diseases, etc.



Re: [agi] RSI - What is it and how fast?

2006-12-08 Thread James Ratcliff

>IMO, AGIs plausibly could actually transfer full, complete skills including 
>whatever learning is part of it. It's all computer bits sitting somewhere, and 
>they should be transferable and then integrable on the other end.

Unfortunately we need a new way to transfer that knowledge: while it is presumably 
easily accessible inside an AGI, getting it out may be an issue, as most 
architectures hold it in some form of networked state that interacts with 
virtually every other part of the AGI system.
  So short of copying over another AGI wholesale, this type of easy skill 
transfer may not be feasible.
  It may be able to generalize and 'teach' the other AGI via multiple repetitions, 
though, in a much quicker and more dedicated fashion than with humans.

James Ratcliff

Brian Atkins <[EMAIL PROTECTED]> wrote: J. Storrs Hall wrote:
> On Monday 04 December 2006 07:55, Brian Atkins wrote:
> 
>> Also, what is really the difference between an Einstein/Feynman brain, and 
>> someone with an 80 IQ?
> 
> I think there's very likely a significant structural difference and the IQ80 
> one is *not* learning universal in my sense.  

So there is some group of humans you would say don't pass your learning 
universal test. Now, of the group that does pass, how big is that group roughly? 
The majority of humans? (IQ 100 and above) Whatever the size of that group, do 
you claim that any of these learning universalists would be capable of coming up 
with Einstein-class (or take your pick) ideas if they had been in his shoes 
during his lifetime? In other words, if they had access to his experiences, 
education, etc.

I would say no. I'm not saying that Einstein is the sole human who could have 
come up with his ideas, but I'm also saying that it's unlikely that someone with 
an IQ of 110 would be able to do so even if given every help. I would say there 
are yet more differences in human minds beyond your learning universal idea 
which separate us, which make the difference for example between a 110 IQ and 
140.

> 
>> For instance, let's say I want to design a new microprocessor. As part of that 
>> process I may need to design a multitude of different circuits, test them, and 
>> then integrate them. To humans, this is not a task that can run on autopilot
>> What if though I find doing this job kinda boring after a while and wish I could 
>> split off a largish chunk of my cognitive resources to chew away on it at a 
>> somewhat slower speed unconsciously in the background and get the work done 
>> while the conscious part of my mind surfs the web? Humans can't do this, but an 
>> AGI likely could.
> 
> At any given level, a mind will have some tasks that require all its attention 
> and resources. If the task is simple enough that it can be done with a 
> fraction of the resources (e.g. driving), we learn to turn it into a habit / 
> skill and do it more or less subconsciously. An AI might do that faster, but 
> we're assuming it could do lots of things faster. On the other hand, it would 
> still have to pay attention to tasks that require all its resources.

This isn't completely addressing my particular scenario, where let's say we have 
a roughly human-level AGI that has to work on a semi-repetitive design task, the 
kind of thing a human is forced to stare at a monitor for yet which doesn't take 
their full absolute maximum brainpower. The AGI should theoretically be able to 
divide its resources in such a way that the design task can be done unconsciously 
in the background, while it can use what resources remain to do other stuff at 
the same time.

The point being although this task takes only part of the human's max abilities, 
by their nature they can't split it off, automate it, or otherwise escape 
letting some brain cycles go to "waste". The human mind is too monolithic in 
such cases which go beyond simple habits, yet are below max output.

> 
>> Again, aiming off the bullseye. Attempting to explain to someone about the 
>> particular clouds you saw yesterday, the particular colors of the sunrise, etc. 
>> you can of course not transfer the full information to them. A second example 
>> would be with skills, which could easily be shared among AGIs but cannot be 
>> shared between humans.
> 
> Actually the ability to copy skills is the key item, imho, that separates 
> humans from the previous smart animals. It made us a memetic substrate. In 
> terms of the animal kingdom, we do it very, very well.  I'm sure that AIs 
> will be able to as well, but probably it's not quite as simple as simply 
> copying a subroutine library from one computer to another.
> 
> The reason is learning. If you keep the simple-copy semantics, no learning 
> happens when skills are transferred. In humans, a learning step is forced, 
> contributing to the memetic evolution of the skill. 

IMO, AGIs plausibly could actually transfer full, complete skills including 
whatever learning is part of it.

Re: [agi] RSI - What is it and how fast?

2006-12-08 Thread Hank Conn

"I think we're within a decade of that tipping point already."

What are some things you are anticipating to happen within the next decade?

-hank

On 12/8/06, J. Storrs Hall <[EMAIL PROTECTED]> wrote:


On Thursday 07 December 2006 05:29, Brian Atkins wrote:

> The point being although this task takes only part of the human's max
> abilities, by their nature they can't split it off, automate it, or otherwise
> escape letting some brain cycles go to "waste". The human mind is too
> monolithic in such cases which go beyond simple habits, yet are below max output.

I wouldn't have used the word monolithic, but rather "parallel with
special-purpose modules," which is certainly true. It's rather like the human
body in that regard -- most tasks we don't manage to use *all* of our muscles
at full capacity at the same time :-)

I don't see why you expect anything all that different for an AI, tho. Purely
serial purely general purpose has its uses, indeed, but you've already given
away a couple orders of magnitude performance. Build an architecture with
substantial parallelism and/or special-purpose hardware and you'll be in the
same boat as the human mind, "not all used at full effectiveness all the
time," even though the heterogeneous architecture is typically faster than
the purely serial purely GP one.

> IMO, AGIs plausibly could actually transfer full, complete skills including
> whatever learning is part of it. It's all computer bits sitting somewhere, and
> they should be transferable and then integrable on the other end.

The reason the Soviet Union imploded is that they designed an economic system
without what they thought were the inefficiencies of the market (and it was
much more efficient by their metrics), but they missed the main function of
the market, which is a learning mechanism that adapts the patterns of
production to the needs of the people.  I claim a lot of the efficiency
people are expecting in AI as opposed to human thought is similarly an
incomplete picture.

> While I sort of agree with the idea of trying to get out in front of others
> who may not be playing nice with their AGI designs (AGI arms race anyone?), I
> don't see how this gets back to answering the original discussion point you
> snipped: why "SuperAI takes over" isn't possible at all. Yes, you've got your
> particular ideas on how to keep your one single AGI nice and friendly. But as
> you point out above, there could be one that arrives before yours, or in your
> "AGI society" story there will be a lot of competing AGIs. Why again can't one
> of them silently root a bunch of boxes and rapidly outgrow the others?

That's two separate questions. To the first I would say that it's very likely
that by 2050 AIs will in fact be running everything, but they'll be doing it
better than the humans who are doing it now. If you ask me why I think the
AIs would do any better than the Republicrats, I'll just say I think they'll
be smarter and they'll be under closer scrutiny, at least at first.

Second question is whether one single AI could get the jump on all the rest.
Again I would use the "mind children" argument -- the AIs that will have the
most capability the soonest are the ones that are in closest collaboration
with human researchers with lots of smarts and horsepower, not the product of
some lone script kiddie using botnets. As soon as they start to show economic
promise, every tom, dick, and harry will get into the act. I think we're
within a decade of that tipping point already.

--Josh






Re: [agi] RSI - What is it and how fast?

2006-12-08 Thread J. Storrs Hall
On Thursday 07 December 2006 05:29, Brian Atkins wrote:

> The point being although this task takes only part of the human's max abilities, 
> by their nature they can't split it off, automate it, or otherwise escape 
> letting some brain cycles go to "waste". The human mind is too monolithic in 
> such cases which go beyond simple habits, yet are below max output.

I wouldn't have used the word monolithic, but rather "parallel with 
special-purpose modules," which is certainly true. It's rather like the human 
body in that regard -- most tasks we don't manage to use *all* of our muscles 
at full capacity at the same time :-)

I don't see why you expect anything all that different for an AI, tho. Purely 
serial purely general purpose has its uses, indeed, but you've already given 
away a couple orders of magnitude performance. Build an architecture with 
substantial parallelism and/or special-purpose hardware and you'll be in the 
same boat as the human mind, "not all used at full effectiveness all the 
time," even though the heterogeneous architecture is typically faster than 
the purely serial purely GP one.

> IMO, AGIs plausibly could actually transfer full, complete skills including 
> whatever learning is part of it. It's all computer bits sitting somewhere, and 
> they should be transferable and then integrable on the other end.

The reason the Soviet Union imploded is that they designed an economic system 
without what they thought were the inefficiencies of the market (and it was 
much more efficient by their metrics), but they missed the main function of 
the market, which is a learning mechanism that adapts the patterns of 
production to the needs of the people.  I claim a lot of the efficiency 
people are expecting in AI as opposed to human thought is similarly an 
incomplete picture.

> While I sort of agree with the idea of trying to get out in front of others who 
> may not be playing nice with their AGI designs (AGI arms race anyone?), I don't 
> see how this gets back to answering the original discussion point you snipped: 
> why "SuperAI takes over" isn't possible at all. Yes, you've got your particular 
> ideas on how to keep your one single AGI nice and friendly. But as you point out 
> above, there could be one that arrives before yours, or in your "AGI society" 
> story there will be a lot of competing AGIs. Why again can't one of them 
> silently root a bunch of boxes and rapidly outgrow the others?

That's two separate questions. To the first I would say that it's very likely 
that by 2050 AIs will in fact be running everything, but they'll be doing it 
better than the humans who are doing it now. If you ask me why I think the 
AIs would do any better than the Republicrats, I'll just say I think they'll 
be smarter and they'll be under closer scrutiny, at least at first.

Second question is whether one single AI could get the jump on all the rest. 
Again I would use the "mind children" argument -- the AIs that will have the 
most capability the soonest are the ones that are in closest collaboration 
with human researchers with lots of smarts and horsepower, not the product of 
some lone script kiddie using botnets. As soon as they start to show economic 
promise, every tom, dick, and harry will get into the act. I think we're 
within a decade of that tipping point already.

--Josh



Re: [agi] RSI - What is it and how fast?

2006-12-08 Thread J. Storrs Hall
Ah, perhaps you agree with Richard Westfall:

" The more I have studied him, the more Newton has receded from me He has 
become for me wholly other, one of the tiny handful of geniuses who have 
shaped the categories of the human intellect, a man not reducible to the 
criteria by which we comprehend our fellow human beings"

If I had to guess, I would say the boundary is at about IQ 140, so the top 1% 
of humanity is universal -- but that's pure speculation; it may well be that 
no human is universal, because of inductive bias, and it takes a community to 
search the space of biases and thus be universal.

--Josh


On Thursday 07 December 2006 05:36, Brian Atkins wrote:
> Huh that doesn't look right when I received it back. Here's a rewritten 
> sentence:
> 
> Whatever the size of that group, do you claim that _all_ of these learning 
> universalists would be capable of coming up with Einstein-class (or take your 
> pick) ideas if they had been in his shoes during his lifetime?
> -- 
> Brian Atkins
> Singularity Institute for Artificial Intelligence
> http://www.singinst.org/
> 
> 
> 



Re: [agi] RSI - What is it and how fast?

2006-12-07 Thread Brian Atkins

sam kayley wrote:


'integrable on the other end' is a rather large issue to shove under the 
carpet in five words ;)


Indeed :-)



For two AIs recently forked from a common parent, probably, but for AIs 
with different 'life experiences' and resulting different conceptual 
structures, why should a severed mindpart be meaningful without 
translation into a common representation, i.e. a language?


If the language could describe things that are not introspectable in 
humans, it would help, but there would still be a process of translation 
which I see no reason to think would be anything like as fast and 
lossless as copying a file.


And as Hall points out, even if direct transfer is possible, it may 
often be better not to do so to make improvement of the skill possible.




Well, one relatively easy way to get at least part way around this would be for 
the two AGIs to define beforehand a common format for the sharing of skill data. 
This might allow for defining lots of things such as labeling inputs/outputs, 
what formats of input/output this skill module uses, etc. If one AGI then 
exported the skill to this format, and the other wrote an import function, then I 
think this should be plausible. Or if an import function is too hard for some 
reason it could run the skill format on a skill format virtual machine and just 
feed it the right inputs and collect and use the outputs.
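
As a purely hypothetical sketch of what such a pre-agreed skill format might 
look like (all names and fields here are invented for illustration; no actual 
AGI uses this):

import json
from dataclasses import dataclass, asdict

@dataclass
class SkillPackage:
    """A common interchange format the two AGIs would agree on beforehand."""
    name: str
    inputs: dict        # label -> declared format, e.g. {"netlist": "text"}
    outputs: dict       # label -> declared format
    parameters: list    # the skill's learned state, serialized as plain numbers
    notes: str = ""     # free-text hints for the importing side

def export_skill(pkg: SkillPackage) -> str:
    # Sender: flatten the skill into the agreed, lossless wire format.
    return json.dumps(asdict(pkg))

def import_skill(wire: str) -> SkillPackage:
    # Receiver: either translate this into its own native module format,
    # or run it unchanged on a "skill format virtual machine" as described above.
    return SkillPackage(**json.loads(wire))

pkg = SkillPackage(name="circuit-layout-v1",
                   inputs={"netlist": "text"}, outputs={"placement": "text"},
                   parameters=[0.12, -3.4, 7.7])
print(import_skill(export_skill(pkg)).name)

The hard part, of course, is everything hidden inside "parameters" -- which is 
exactly the translation problem raised above.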


Would such a chunk of bits also be able to be further learned/improved and not 
just used as an external tool by the new AGI? I'm not sure, but I would lean 
towards saying yes if the import code used by the 2nd AGI takes the skill format 
bits and uses those to generate an integrated mind module of its own special 
internal format.


I think getting too far into these technical details is going beyond my own 
skills, so I should stop here and just retreat to my original idea: because the 
bits for a skill are there in the first AGI, and because two AGIs can transmit 
lossless data bits directly between themselves quickly (compared to humans), 
this could create at least hypothetically a "direct" skill sharing functionality 
which humans do not have.


We all have a lot of hypotheses at this point in history, I am just trying to 
err towards caution rather than ones that could be dangerous if proven wrong.

--
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.singinst.org/



Re: [agi] RSI - What is it and how fast?

2006-12-07 Thread sam kayley



Brian Atkins wrote:

J. Storrs Hall wrote:


Actually the ability to copy skills is the key item, imho, that 
separates humans from the previous smart animals. It made us a 
memetic substrate. In terms of the animal kingdom, we do it very, 
very well.  I'm sure that AIs will be able to as well, but probably 
it's not quite as simple as simply copying a subroutine library from 
one computer to another.


The reason is learning. If you keep the simple-copy semantics, no 
learning happens when skills are transferred. In humans, a learning 
step is forced, contributing to the memetic evolution of the skill. 
IMO, AGIs plausibly could actually transfer full, complete skills 
including whatever learning is part of it. It's all computer bits 
sitting somewhere, and they should be transferable and then integrable 
on the other end.


If so, this is far more powerful, new, and distinct than a newbie 
tennis player watching a pro, and trying to learn how to serve that 
well over a period of years, or a math student trying to learn 
calculus. Even aside from the dramatic time scale difference, humans 
can never transfer their skills fully exactly in a lossless-esque 
fashion.
'integrable on the other end' is a rather large issue to shove under the 
carpet in five words ;)


For two AIs recently forked from a common parent, probably, but for AIs 
with different 'life experiences' and resulting different conceptual 
structures, why should a severed mindpart be meaningful without 
translation into a common representation, i.e. a language?


If the language could describe things that are not introspectable in 
humans, it would help, but there would still be a process of translation 
which I see no reason to think would be anything like as fast and 
lossless as copying a file.


And as Hall points out, even if direct transfer is possible, it may 
often be better not to do so to make improvement of the skill possible.




Re: [agi] RSI - What is it and how fast?

2006-12-06 Thread Brian Atkins

Huh that doesn't look right when I received it back. Here's a rewritten 
sentence:

Whatever the size of that group, do you claim that _all_ of these learning 
universalists would be capable of coming up with Einstein-class (or take your 
pick) ideas if they had been in his shoes during his lifetime?

--
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.singinst.org/



Re: [agi] RSI - What is it and how fast?

2006-12-06 Thread Brian Atkins

Small correction:

Brian Atkins wrote:


So there is some group of humans you would say don't pass your learning 
universal test. Now, of the group that does pass, how big is that group 
roughly? The majority of humans? (IQ 100 and above) Whatever the size of 
that group, do you claim that any of these learning universalists would 

^^^
Should be "all" I suppose.
--
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.singinst.org/



Re: [agi] RSI - What is it and how fast?

2006-12-06 Thread Brian Atkins

J. Storrs Hall wrote:

On Monday 04 December 2006 07:55, Brian Atkins wrote:

Also, what is really the difference between an Einstein/Feynman brain, and 
someone with an 80 IQ?


I think there's very likely a significant structural difference and the IQ80 
one is *not* learning universal in my sense.  


So there is some group of humans you would say don't pass your learning 
universal test. Now, of the group that does pass, how big is that group roughly? 
The majority of humans? (IQ 100 and above) Whatever the size of that group, do 
you claim that any of these learning universalists would be capable of coming up 
with Einstein-class (or take your pick) ideas if they had been in his shoes 
during his lifetime? In other words, if they had access to his experiences, 
education, etc.


I would say no. I'm not saying that Einstein is the sole human who could have 
come up with his ideas, but I'm also saying that it's unlikely that someone with 
an IQ of 110 would be able to do so even if given every help. I would say there 
are yet more differences in human minds beyond your learning universal idea 
which separate us, which make the difference for example between a 110 IQ and 140.




For instance, let's say I want to design a new microprocessor. As part of that 
process I may need to design a multitude of different circuits, test them, and 
then integrate them. To humans, this is not a task that can run on autopilot
What if though I find doing this job kinda boring after a while and wish I could 
split off a largish chunk of my cognitive resources to chew away on it at a 
somewhat slower speed unconsciously in the background and get the work done 
while the conscious part of my mind surfs the web? Humans can't do this, but an 
AGI likely could.


At any given level, a mind will have some tasks that require all its attention 
and resources. If the task is simple enough that it can be done with a 
fraction of the resources (e.g. driving), we learn to turn it into a habit / 
skill and do it more or less subconsciously. An AI might do that faster, but 
we're assuming it could do lots of things faster. On the other hand, it would 
still have to pay attention to tasks that require all its resources.


This isn't completely addressing my particular scenario, where let's say we have 
a roughly human-level AGI that has to work on a semi-repetitive design task, the 
kind of thing a human is forced to stare at a monitor for yet which doesn't take 
their full absolute maximum brainpower. The AGI should theoretically be able to divide 
its resources in such a way that the design task can be done unconsciously in 
the background, while it can use what resources remain to do other stuff at the 
same time.


The point being although this task takes only part of the human's max abilities, 
by their nature they can't split it off, automate it, or otherwise escape 
letting some brain cycles go to "waste". The human mind is too monolithic in 
such cases which go beyond simple habits, yet are below max output.
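
A minimal sketch of that kind of resource split, assuming the design task can 
be expressed as an ordinary function (all names here are illustrative, not a 
real AGI architecture):

from concurrent.futures import ProcessPoolExecutor
import time

def semi_repetitive_design_task(n_circuits: int) -> int:
    """Stand-in for the boring design work: pretend to evaluate n_circuits circuits."""
    passed = 0
    for i in range(n_circuits):
        time.sleep(0.01)            # placeholder for simulate-and-test
        passed += (i % 3 != 0)      # placeholder acceptance criterion
    return passed

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=1) as background:
        # The "unconscious" fraction of resources chews on the design task...
        future = background.submit(semi_repetitive_design_task, 200)
        # ...while the "conscious" fraction is free to do something else meanwhile.
        while not future.done():
            time.sleep(0.1)         # placeholder for surfing the web, etc.
        print("circuits passed:", future.result())

The point of the sketch is only that nothing in software forces the monolithic 
arrangement described here for humans: a fraction of the machine can be walled 
off for the dull task while the rest stays free.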




Again, aiming off the bullseye. Attempting to explain to someone about the 
particular clouds you saw yesterday, the particular colors of the sunrise, etc. 
you can of course not transfer the full information to them. A second example 
would be with skills, which could easily be shared among AGIs but cannot be 
shared between humans.


Actually the ability to copy skills is the key item, imho, that separates 
humans from the previous smart animals. It made us a memetic substrate. In 
terms of the animal kingdom, we do it very, very well.  I'm sure that AIs 
will be able to as well, but probably it's not quite as simple as simply 
copying a subroutine library from one computer to another.


The reason is learning. If you keep the simple-copy semantics, no learning 
happens when skills are transferred. In humans, a learning step is forced, 
contributing to the memetic evolution of the skill. 


IMO, AGIs plausibly could actually transfer full, complete skills including 
whatever learning is part of it. It's all computer bits sitting somewhere, and 
they should be transferable and then integrable on the other end.


If so, this is far more powerful, new, and distinct than a newbie tennis player 
watching a pro, and trying to learn how to serve that well over a period of 
years, or a math student trying to learn calculus. Even aside from the dramatic 
time scale difference, humans can never transfer their skills fully exactly in a 
lossless-esque fashion.




Currently all I see is a very large and rapidly growing very insecure network of 
rapidly improving computers out there ripe for the picking by the first smart 
enough AGI. 


A major architectural feature of both the brain and existing supercomputers is 
that the majority of the structure/cost is in the communications fabric, not 
the processors themselves. A botnet using residential internet connections 
would be immensely hobbled. (It would be different, of course, if it took over 
someone's Blue Gene...)

Re: [agi] RSI - What is it and how fast?

2006-12-06 Thread J. Storrs Hall
I'm on the road, so I'll have to give short shrift to this, but I'll try to 
hit a few high points:

On Monday 04 December 2006 07:55, Brian Atkins wrote:

> Putting aside the speed differential which you accept, but dismiss as important 
> for RSI, isn't there a bigger issue you're skipping regarding the other 
> differences between an Opteron-level PC and an 8080-era box? For example, there 
> are large differences in the addressable memory amounts. ... Does 
> it multiply with the speed differential?

I don't think the speed differential is unimportant -- but you need to 
separate it out to make certain kinds of theoretical analysis, same as you 
need to to consider Turing completeness in a computer.

ANY computer has a finite amount of memory and is thus a FSA and not a Turing 
machine. To talk about Turing-completeness you have to assume that it will read 
and write to an (unbounded) outboard memory. Thus the speed difference 
between, say, an 8080 and an Opteron will be a constant factor.

> Also, what is really the difference between an Einstein/Feynman brain, and 
> someone with an 80 IQ?

I think there's very likely a significant structural difference and the IQ80 
one is *not* learning universal in my sense.  

> For instance, let's say I want to design a new microprocessor. As part of that 
> process I may need to design a multitude of different circuits, test them, and 
> then integrate them. To humans, this is not a task that can run on autopilot
> What if though I find doing this job kinda boring after a while and wish I could 
> split off a largish chunk of my cognitive resources to chew away on it at a 
> somewhat slower speed unconsciously in the background and get the work done 
> while the conscious part of my mind surfs the web? Humans can't do this, but an 
> AGI likely could.

At any given level, a mind will have some tasks that require all its attention 
and resources. If the task is simple enough that it can be done with a 
fraction of the resources (e.g. driving), we learn to turn it into a habit / 
skill and do it more or less subconsciously. An AI might do that faster, but 
we're assuming it could do lots of things faster. On the other hand, it would 
still have to pay attention to tasks that require all its resources.

> Again, aiming off the bullseye. Attempting to explain to someone about the 
> particular clouds you saw yesterday, the particular colors of the sunrise, etc. 
> you can of course not transfer the full information to them. A second example 
> would be with skills, which could easily be shared among AGIs but cannot be 
> shared between humans.

Actually the ability to copy skills is the key item, imho, that separates 
humans from the previous smart animals. It made us a memetic substrate. In 
terms of the animal kingdom, we do it very, very well.  I'm sure that AIs 
will be able to as well, but probably it's not quite as simple as simply 
copying a subroutine library from one computer to another.

The reason is learning. If you keep the simple-copy semantics, no learning 
happens when skills are transferred. In humans, a learning step is forced, 
contributing to the memetic evolution of the skill. 

> > Compare that with chess, where the learned chess module of a human is about 
> > equal to a supercomputer with specialized hardware, but where the problem is 
> > simple enough that we know how to program the supercomputer. 
> 
> I'm not really sure what this segment of text is getting at. Are you claiming 
> that humans can always laboriously create mind modules that will allow them to 
> perform feats equal to an AGI?

No, of course not, when the AI is running on vastly superior hardware. All I 
was saying is that we seem to be able to compile our skills to a form that is 
about as efficient on the hardware we do have, as the innate skills are. (mod 
cases where the actual neurons are specialized.)

> Currently all I see is a very large and rapidly growing very insecure network of 
> rapidly improving computers out there ripe for the picking by the first smart 
> enough AGI. 

A major architectural feature of both the brain and existing supercomputers is 
that the majority of the structure/cost is in the communications fabric, not 
the processors themselves. A botnet using residential internet connections 
would be immensely hobbled. (It would be different, of course, if it took 
over someone's Blue Gene...)

> What comes after that I think is unprovable currently. We do know of 
> course that relatively small tweaks in brain design led from apes to us, a 
> rather large difference in capabilities that did not apparently require too many 
> more atoms in the skull. Your handwaving regarding universality aside, this also 
> worries me.

There we will just have to disagree. I see one quantum jump in intelligence 
from Neanderthals to us. Existing AI programs are on the Neanderthal side of 
it. We AGIers are hoping to figure out what the Good Trick is an

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-05 Thread Matt Mahoney

--- Eric Baum <[EMAIL PROTECTED]> wrote:

> 
> Matt> --- Hank Conn <[EMAIL PROTECTED]> wrote:
> 
> >> On 12/1/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> >> > The "goals of humanity", like all other species, was determined by
> >> > evolution.  It is to propagate the species.
> >> 
> >> 
> >> That's not the goal of humanity. That's the goal of the evolution
> >> of humanity, which has been defunct for a while.
> 
> Matt> We have slowed evolution through medical advances, birth control
> Matt> and genetic engineering, but I don't think we have stopped it
> Matt> completely yet.
> 
> I don't know what reason there is to think we have slowed
> evolution, rather than speeded it up.
> 
> I would hazard to guess, for example, that since the discovery of 
> birth control, we have been selecting very rapidly for people who 
> choose to have more babies. In fact, I suspect this is one reason
> why the US (which became rich before most of the rest of the world)
> has a higher birth rate than Europe.

Yes, but actually most of the population increase in the U.S. is from
immigration.  Population is growing the fastest in the poorest countries,
especially Africa.

> Likewise, I expect medical advances in childbirth etc are selecting
> very rapidly for multiple births (which once upon a time often killed 
> off mother and child.) I expect this, rather than or in addition to
> the effects of fertility drugs, is the reason for the rise in 
> multiple births.

The main effect of medical advances is to keep children alive who would
otherwise have died from genetic weaknesses, allowing these weaknesses to be
propagated.

Genetic engineering has not yet had much effect on human evolution, as it has
in agriculture.  We have the technology to greatly speed up human evolution,
but it is suppressed for ethical reasons.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Re: Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Ben Goertzel

For a baby AGI, I would force the physiological goals, yeah.

In practice, baby Novamente's only explicit goal is getting rewards
from its teacher...  Its other goals, such as learning new
information, are left implicit in the action of the system's internal
cognitive processes...  Its simulation world is "friendly" in the
sense that it doesn't currently need to take any specific actions in
order just to stay alive...

-- Ben
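
Not Novamente's actual code -- just a toy sketch of the distinction being drawn
here between a wired-in drive, whose weight training never touches, and learned
goals the system creates for itself (all class and method names are invented):

class Drive:
    """Built-in supergoal; its weight is fixed by the designer, not by learning."""
    def __init__(self, name, weight):
        self.name, self.weight = name, weight

class LearnedGoal:
    """Goal the system creates internally; its weight is revised by experience."""
    def __init__(self, name, weight=0.0):
        self.name, self.weight = name, weight
    def reinforce(self, delta):
        self.weight += delta

class GoalSystem:
    def __init__(self, drives):
        self.drives = drives      # fixed set, chosen by the designer
        self.learned = {}         # grows as the agent forms its own subgoals
    def add_subgoal(self, name):
        self.learned.setdefault(name, LearnedGoal(name))
    def priorities(self):
        goals = list(self.drives) + list(self.learned.values())
        return sorted(goals, key=lambda g: g.weight, reverse=True)

# A "baby AGI" configuration: one explicit wired-in drive, everything else learned.
gs = GoalSystem(drives=[Drive("get_teacher_reward", weight=1.0)])
gs.add_subgoal("learn_new_information")
gs.learned["learn_new_information"].reinforce(0.3)
print([g.name for g in gs.priorities()])   # teacher reward still outranks the rest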

On 12/4/06, James Ratcliff <[EMAIL PROTECTED]> wrote:

Ok,
  That is a start, but you don't have a difference there between externally
required goals and internally created goals.
  And what smallest set of external goals do you expect to give?
Would you or would you not force as Top Level the Physiological goals (per the wiki
page you cited) from signals, presumably for a robot AGI?

What other goals are easily definable, and necessary for an AGI, and how do
we model them in such a way that they coexist with the internally created
goals.

I have worked on the rudiments of an AGI system, but am having trouble
defining its internal goal systems.

James Ratcliff


Ben Goertzel <[EMAIL PROTECTED]> wrote:
 Regarding the definition of goals and supergoals, I have made attempts at:

http://www.agiri.org/wiki/index.php/Goal

http://www.agiri.org/wiki/index.php/Supergoal

The scope of human supergoals has been moderately well articulated by
Maslow IMO:

http://en.wikipedia.org/wiki/Maslow's_hierarchy_of_needs

BTW, I have borrowed from Stan Franklin the use of the term "drive" to
denote a built-in rather than learned supergoal:

http://www.agiri.org/wiki/index.php/Drive

-- Ben G


On 12/4/06, James Ratcliff wrote:
> Ok,
> A lot has been thrown around here about "Top-Level" goals, but no real
> definition has been given, and I am confused as it seems to be covering a lot
> of ground for some people.
>
> What 'level' and what are these top level goals for humans/AGI's?
>
> It seems that "Staying Alive" is a big one, but that appears to contain
> hunger/sleep/ and most other body level needs.
>
> And how hard-wired are these goals, and how (simply) do we really hard-wire
> them at all?
>
> Our goal of staying alive appears to be "biologically preferred" or
> something like that, but can definitely be overridden by depression / saving
> a person in a burning building.
>
> James Ratcliff
>
>
> Ben Goertzel wrote:
> IMO, humans **can** reprogram their top-level goals, but only with
> difficulty. And this is correct: a mind needs to have a certain level
> of maturity to really reflect on its own top-level goals, so that it
> would be architecturally foolish to build a mind that involved
> revision of supergoals at the infant/child phase.
>
> However, without reprogramming our top-level goals, we humans still
> have a lot of flexibility in our ultimate orientation. This is
> because we are inconsistent systems: our top-level goals form a set of
> not-entirely-consistent objectives... so we can shift from one
> wired-in top-level goal to another, playing with the inconsistency.
> (I note that, because the logic of the human mind is probabilistically
> paraconsistent, the existence of inconsistency does not necessarily
> imply that "all things are derivable" as it would in typical predicate
> logic.)
>
> Those of us who seek to become "as logically consistent as possible,
> given the limitations of our computational infrastructure" have a
> tough quest, because the human mind/brain is not wired for
> consistency; and I suggest that this inconsistency pervades the human
> wired-in supergoal set as well...
>
> Much of the inconsistency within the human wired-in supergoal set has
> to do with time-horizons. We are wired to want things in the short
> term that contradict the things we are wired to want in the
> medium/long term; and each of our mind/brains' self-organizing
> dynamics needs to work out these evolutionarily-supplied
> contradictions on its own One route is to try to replace our
> inconsistent initial wiring with a more consistent supergoal set; the
> more common route is to oscillate chaotically from one side of the
> contradiction to the other...
>
> (Yes, I am speaking loosely here rather than entirely rigorously; but
> formalizing all this stuff would take a lot of time and space...)
>
> -- Ben F
>
>
> On 12/3/06, Matt Mahoney wrote:
> >
> > --- Mark Waser wrote:
> >
> > > > You cannot turn off hunger or pain. You cannot
> > > > control your emotions.
> > >
> > > Huh? Matt, can you really not ignore hunger or pain? Are you really 100%
> > > at the mercy of your emotions?
> >
> > Why must you argue with everything I say? Is this not a sensible
> > statement?
> >
> > > > Since the synaptic weights cannot be altered by
> > > > training (classical or operant conditioning)
> > >
> > > Who says that synaptic weights cannot be altered? And there's endless
> > > irrefutable evidence that the sum of synaptic weights is certainly
> > > constantly altering by the directed die-off of neurons.
> >
> > But not by training. You don't decide to be hungry or not, because animals
> > that could do so were removed from the gene pool.

Re: Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread James Ratcliff
Ok,
  That is a start, but you don't have a difference there between externally 
required goals and internally created goals.
  And what smallest set of external goals do you expect to give?
Would you or would you not force as Top Level the Physiological goals (per the wiki 
page you cited) from signals, presumably for a robot AGI?

What other goals are easily definable, and necessary for an AGI, and how do we 
model them in such a way that they coexist with the internally created goals.

I have worked on the rudiments of an AGI system, but am having trouble defining 
its internal goal systems.

James Ratcliff


Ben Goertzel <[EMAIL PROTECTED]> wrote: Regarding the definition of goals and 
supergoals, I have made attempts at:

http://www.agiri.org/wiki/index.php/Goal

http://www.agiri.org/wiki/index.php/Supergoal

The scope of human supergoals has been moderately well articulated by
Maslow IMO:

http://en.wikipedia.org/wiki/Maslow's_hierarchy_of_needs

BTW, I have borrowed from Stan Franklin the use of the term "drive" to
denote a built-in rather than learned supergoal:

http://www.agiri.org/wiki/index.php/Drive

-- Ben G


On 12/4/06, James Ratcliff  wrote:
> Ok,
>   A lot has been thrown around here about "Top-Level" goals, but no real
> definition has been given, and I am confused as it seems to be covering a lot
> of ground for some people.
>
> What 'level' and what are these top level goals for humans/AGI's?
>
> It seems that "Staying Alive" is a big one, but that appears to contain
> hunger/sleep/ and most other body level needs.
>
> And how hard-wired are these goals, and how (simply) do we really hard-wire
> them at all?
>
> Our goal of staying alive appears to be "biologically preferred" or
> something like that, but can definitely be overridden by depression / saving
> a person in a burning building.
>
> James Ratcliff
>
>
> Ben Goertzel  wrote:
>  IMO, humans **can** reprogram their top-level goals, but only with
> difficulty. And this is correct: a mind needs to have a certain level
> of maturity to really reflect on its own top-level goals, so that it
> would be architecturally foolish to build a mind that involved
> revision of supergoals at the infant/child phase.
>
> However, without reprogramming our top-level goals, we humans still
> have a lot of flexibility in our ultimate orientation. This is
> because we are inconsistent systems: our top-level goals form a set of
> not-entirely-consistent objectives... so we can shift from one
> wired-in top-level goal to another, playing with the inconsistency.
> (I note that, because the logic of the human mind is probabilistically
> paraconsistent, the existence of inconsistency does not necessarily
> imply that "all things are derivable" as it would in typical predicate
> logic.)
>
> Those of us who seek to become "as logically consistent as possible,
> given the limitations of our computational infrastructure" have a
> tough quest, because the human mind/brain is not wired for
> consistency; and I suggest that this inconsistency pervades the human
> wired-in supergoal set as well...
>
> Much of the inconsistency within the human wired-in supergoal set has
> to do with time-horizons. We are wired to want things in the short
> term that contradict the things we are wired to want in the
> medium/long term; and each of our mind/brains' self-organizing
> dynamics needs to work out these evolutionarily-supplied
> contradictions on its own One route is to try to replace our
> inconsistent initial wiring with a more consistent supergoal set; the
> more common route is to oscillate chaotically from one side of the
> contradiction to the other...
>
> (Yes, I am speaking loosely here rather than entirely rigorously; but
> formalizing all this stuff would take a lot of time and space...)
>
> -- Ben F
>
>
> On 12/3/06, Matt Mahoney wrote:
> >
> > --- Mark Waser wrote:
> >
> > > > You cannot turn off hunger or pain. You cannot
> > > > control your emotions.
> > >
> > > Huh? Matt, can you really not ignore hunger or pain? Are you really 100%
> > > at the mercy of your emotions?
> >
> > Why must you argue with everything I say? Is this not a sensible
> statement?
> >
> > > > Since the synaptic weights cannot be altered by
> > > > training (classical or operant conditioning)
> > >
> > > Who says that synaptic weights cannot be altered? And there's endless
> > > irrefutable evidence that the sum of synaptic weights is certainly
> > > constantly altering by the directed die-off of neurons.
> >
> > But not by training. You don't decide to be hungry or not, because animals
> > that could do so were removed from the gene pool.
> >
> > Is this not a sensible way to program the top level goals for an AGI?
> >
> >
> > -- Matt Mahoney, [EMAIL PROTECTED]
> >
> > -
> > This list is sponsored by AGIRI: http://www.agiri.org/email
> > To unsubscribe or change your options, please go to:
> > http://v2.listbox.com/member/?list_id=303
> >
>
> -
> This list is sponsored by AGIRI: http:/

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Philip Goetz

On 12/4/06, Philip Goetz <[EMAIL PROTECTED]> wrote:

If you maintain your assertion, I'll put you in my killfile, because
we cannot communicate.


Richard Loosemore told me that I'm overreacting.  I can tell that I'm
overly emotional over this, so it might be true.  Sorry for flaming.
I am bewildered by Mark's statement, but I will look for a
less-inflammatory way of saying so next time.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Eric Baum

Matt> --- Hank Conn <[EMAIL PROTECTED]> wrote:

>> On 12/1/06, Matt Mahoney <[EMAIL PROTECTED]> wrote: > The "goals
>> of humanity", like all other species, was determined by >
>> evolution.  > It is to propagate the species.
>> 
>> 
>> That's not the goal of humanity. That's the goal of the evolution
>> of humanity, which has been defunct for a while.

Matt> We have slowed evolution through medical advances, birth control
Matt> and genetic engineering, but I don't think we have stopped it
Matt> completely yet.

I don't know what reason there is to think we have slowed
evolution, rather than speeded it up.

I would hazard to guess, for example, that since the discovery of 
birth control, we have been selecting very rapidly for people who 
choose to have more babies. In fact, I suspect this is one reason
why the US (which became rich before most of the rest of the world)
has a higher birth rate than Europe.

Likewise, I expect medical advances in childbirth etc are selecting
very rapidly for multiple births (which once upon a time often killed 
off mother and child.) I expect this, rather than or in addition to
the effects of fertility drugs, is the reason for the rise in 
multiple births.

etc.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Mark Waser

To allow that somewhere in the Himalayas, someone may be able,
with years of training, to lessen the urgency of hunger and
pain, is not sufficient evidence to assert that the proposition
that not everyone can turn them off completely is insensible.


The first sentence of the proposition was exactly "You cannot turn off 
hunger." (i.e. a universal claim, not merely that not everyone can turn it off)


My response is "I certainly can -- not permanently, but certainly so 
completely that I am not aware of it for hours at a time" and further that I 
don't believe that I am at all unusual in this regard.



- Original Message - 
From: "Philip Goetz" <[EMAIL PROTECTED]>

To: 
Sent: Monday, December 04, 2006 2:01 PM
Subject: Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is 
it and how fast?]




On 12/4/06, Ben Goertzel <[EMAIL PROTECTED]> wrote:

> The statement, "You cannot turn off hunger or pain" is sensible.
> In fact, it's one of the few statements in the English language that
> is LITERALLY so.  Philosophically, it's more certain than
> "I think, therefore I am".
>
> If you maintain your assertion, I'll put you in my killfile, because
> we cannot communicate.

It is reported that, with sufficiently advanced training in
appropriate mind-control arts (e.g. some Oriental ones), something
accurately describable as "turning off hunger or pain" becomes
possible, from a subjective experiential perspective.


To allow that somewhere in the Himalayas, someone may be able,
with years of training, to lessen the urgency of hunger and
pain, is not sufficient evidence to assert that the proposition
that not everyone can turn them off completely is insensible.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Mark Waser
   Can you not concentrate on something else enough that you no longer feel 
hunger?  How many people do you know that have "forgotten to eat" for hours 
at a time when sucked into computer games or other activities?


   Is the same not true of pain?  Have you not heard of yogis that have 
trained their minds to concentrate strongly enough that even the most severe 
of discomfort is ignored?  How is this not "turning off pain"?  If you're 
going to argue that the nerves are still firing and further that the mere 
fact of nerves firing is relevant to the original argument, then  feel free 
to killfile me.  The original point was that humans are *NOT* absolute 
slaves to hunger and pain.


   Are you
   a) arguing that humans *ARE* absolute slaves to hunger and pain
   OR
   b) are you beating me up over a trivial sub-point that isn't 
connected back to the original argument?


- Original Message - 
From: "Philip Goetz" <[EMAIL PROTECTED]>

To: 
Sent: Monday, December 04, 2006 1:38 PM
Subject: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it 
and how fast?]




On 12/4/06, Mark Waser <[EMAIL PROTECTED]> wrote:

> Why must you argue with everything I say?  Is this not a sensible
> statement?

I don't argue with everything you say.  I only argue with things that I
believe are wrong.  And no, the statements "You cannot turn off hunger or
pain.  You cannot control your emotions" are *NOT* sensible at all.


Mark -

The statement, "You cannot turn off hunger or pain" is sensible.
In fact, it's one of the few statements in the English language that
is LITERALLY so.  Philosophically, it's more certain than
"I think, therefore I am".

If you maintain your assertion, I'll put you in my killfile, because
we cannot communicate.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Charles D Hixson

Consider as a possible working definition:
A goal is the target state of a homeostatic system.  (Don't take 
homeostatic too literally, though.)


Thus, if one sets a thermostat to 70 degrees Fahrenheit, then its goal 
is to change the room temperature to be not less than 67 degrees 
Fahrenheit.  (I'm assuming that the thermostat allows a 6 degree heat 
swing, heats until it senses 73 degrees, then turns off the heater until 
the temperature drops below 67 degrees.)


Thus, the goal is the target at which a system (or subsystem) is aimed.

Note that with this definition goals do not imply intelligence of more 
than the most basic level.  (The thermostat senses its environment 
and reacts to adjust it to suit its goals, but it has no knowledge of 
what it is doing or why, or even THAT it is doing it.)  One could 
reasonably assert that the intelligence of the thermostat is, or at 
least has been, embodied outside the thermostat.  I'm not certain that 
this is useful, but it's reasonable, and if you need to tie goals into 
intelligence, then adopt that model.
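
To make the "goal as target state of a homeostatic system" reading concrete, here 
is a minimal sketch in Python.  The numbers are just the 67/73 degree band from 
the example above, and the room model is a made-up toy:

    # The "goal" is nothing more than the band this loop defends:
    # heat below 67 F, coast above 73 F, hysteresis in between.
    class Thermostat:
        def __init__(self, low=67.0, high=73.0):
            self.low = low          # turn the heater on below this
            self.high = high        # turn the heater off above this
            self.heating = False

        def step(self, room_temp):
            """Return True if the heater should run right now."""
            if room_temp < self.low:
                self.heating = True
            elif room_temp > self.high:
                self.heating = False
            # between low and high, keep doing whatever we were doing
            return self.heating

    if __name__ == "__main__":
        temp, stat = 60.0, Thermostat()
        for minute in range(30):
            heating = stat.step(temp)
            # toy physics: warm while heating, cool toward ambient otherwise
            temp += 1.5 if heating else -0.5
            print(f"minute {minute:2d}: {temp:5.1f} F, heater {'on' if heating else 'off'}")

The thermostat never represents its goal anywhere; the goal is just the band the 
loop keeps returning the room to.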



James Ratcliff wrote:
Can we go back to a simpler distinction then: what are you defining 
"Goal" as?


I see the goal term as a higher level reasoning 'tool',
wherein the body is constantly sending signals to our minds, but the 
goals are all created consciously or semi-consciously.


Are you saying we should partition the "Top-Level" goals into some 
form of physical body-imposed goals and other types, or
do you think we should leave it up to a single Controller to 
interpret the signals coming from the body and form the goals?


In humans it looks to be the one way, but with AGI's it appears it 
would/could be another.


James

*/Charles D Hixson <[EMAIL PROTECTED]>/* wrote:

J...
Goals are important. Some are built-in, some are changeable. Habits
are also important, perhaps nearly as much so. Habits are initially
created to satisfy goals, but when goals change, or circumstances
alter,
the habits don't automatically change in synchrony.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303




___
James Ratcliff - http://falazar.com
New Torrent Site, Has TV and Movie Downloads! 
http://www.falazar.com/projects/Torrents/tvtorrents_show.php






This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303 


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] RSI - What is it and how fast?

2006-12-04 Thread Philip Goetz

On 12/1/06, J. Storrs Hall, PhD. <[EMAIL PROTECTED]> wrote:

On Friday 01 December 2006 20:06, Philip Goetz wrote:

> Thus, I don't think my ability to follow rules written on paper to
> implement a Turing machine proves that the operations powering my
> consciousness are Turing-complete.

Actually, I think it does prove it, since your simulation of a Turing machine
would consist of conscious operations.


But the simulation of a Chinese speaker, carried out by the man in Searle's
Chinese room, consists of conscious operations.

If I simulate a Turing machine in that way, then the system consisting of
me plus a rulebook and some slips of paper is Turing-complete.
If you conclude that my conscious mind is thus Turing-complete,
you must be identifying my conscious mind with the consciousness
of the system consisting of me plus a rulebook and some slips of paper.
If you do that, then in the case of the Chinese room, you must also
identify my conscious mind with the consciousness
of the system consisting of me plus a rulebook and some slips of paper.
Then you arrive at Searle's conclusion:  Either I must be conscious of
speaking Chinese, or merely following an algorithm that results in
speaking Chinese does not entail consciousness, and hence
a simulation of consciousness might be perfect, but isn't
necessarily conscious.
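
For concreteness, this "me plus a rulebook and some slips of paper" setup is easy 
to sketch in Python.  The rule table below is a made-up example (it just adds one 
to a binary number); the person in the thought experiment plays the role of the loop:

    # A tiny Turing-machine simulator: the rule table is the "rulebook",
    # the tape dict is the "slips of paper".
    def run_tm(rules, tape, head, state, blank="_", max_steps=1000):
        tape = dict(enumerate(tape))              # position -> symbol
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = tape.get(head, blank)
            write, move, state = rules[(state, symbol)]
            tape[head] = write
            head += -1 if move == "L" else 1
        cells = [tape.get(i, blank) for i in range(min(tape), max(tape) + 1)]
        return "".join(cells).strip(blank)

    # (state, symbol read) -> (symbol to write, head move, next state)
    increment_rules = {
        ("carry", "1"): ("0", "L", "carry"),   # 1 + carry = 0, carry moves left
        ("carry", "0"): ("1", "L", "halt"),    # absorb the carry and stop
        ("carry", "_"): ("1", "L", "halt"),    # ran off the left end: new high bit
    }

    print(run_tm(increment_rules, "1011", head=3, state="carry"))   # -> 1100

Following these rules by hand requires no understanding of binary arithmetic, 
which is the analogy being drawn to the Chinese room.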

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread James Ratcliff
Yes, that is what I am aiming towards here, so do we have any Top-Level goals 
of this type, or are almost all of these things merely signals, with the goal 
creation at another level, and how do we define the top level goals?

James

Ben Goertzel <[EMAIL PROTECTED]> wrote: > To allow that somewhere in the 
Himalayas, someone may be able,
> with years of training, to lessen the urgency of hunger and
> pain,

An understatement: **I** can dramatically lessen the urgency of hunger
and pain.

What the right sort of training can do is to teach you to come very
close to not attending to them at all...

But I bet these gurus are not stopping the "pain" signals from
propagating from body to brain; they are likely "just" radically
altering the neural interpretation of these signals...

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



___
James Ratcliff - http://falazar.com
New Torrent Site, Has TV and Movie Downloads! 
http://www.falazar.com/projects/Torrents/tvtorrents_show.php
 

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread James Ratcliff
Can we go back to a simpler distinction then: what are you defining "Goal" as?

I see the goal term as a higher level reasoning 'tool',
wherein the body is constantly sending signals to our minds, but the goals are 
all created consciously or semi-consciously.

Are you saying we should partition the "Top-Level" goals into some form of 
physical body-imposed goals and other types, or 
do you think we should leave it up to a single Controller to interpret the 
signals coming from the body and form the goals?

In humans it looks to be the one way, but with AGI's it appears it would/could 
be another.

James

Charles D Hixson <[EMAIL PROTECTED]> wrote: James Ratcliff wrote:
> There is a needed distinction that must be made here about hunger 
> as a goal stack motivator.
>
> We CANNOT change the hunger sensation (short of physical 
> manipulations, or mind-control "stuff") as it is a given sensation that 
> comes directly from the physical body.
>
> What we can change is the placement in the goal stack, or the priority 
> position it is given.  We CAN choose to put it on the bottom of our 
> list of goals, or remove it from the list and try and starve ourselves 
> to death.
>   Our body will then continuously send the hunger signals to us, and we 
> must decide how to handle that signal.
>
> So in general, the Signal is there, but the goal is not; it is under 
> our control.
>
> James Ratcliff
That's an important distinction, but I would assert that although one 
can insert goals "above" a built-in goal (hunger, e.g.), one cannot 
remove that goal.  There is a very long period when someone on a hunger 
strike must continually reinforce the goal of not-eating.  The goal of 
"satisfy hunger" is only removed when the body decides that it is 
unreachable (at the moment). 

The goal cannot be removed by intention, it can only be overridden and 
suppressed.  Other varieties of goal, volitionally chosen ones, can be 
volitionally revoked.  Even in such cases habit can cause the "automatic 
execution of tasks required to achieve the goal" to be continued.  I 
retired years ago, and although I no longer automatically get up at 5:30 
each morning, I still tend to arise before 8:00.  This is quite a 
contrast from my time in college when I would rarely arise before 9:00, 
and always felt I was getting up too early.  It's true that with a 
minimal effort I can change things so that I get up at (nearly?) any 
particular time...but as soon as I relax it starts drifting back to 
early morning.

Goals are important.  Some are built-in, some are changeable.  Habits 
are also important, perhaps nearly as much so.  Habits are initially 
created to satisfy goals, but when goals change, or circumstances alter, 
the habits don't automatically change in synchrony.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



___
James Ratcliff - http://falazar.com
New Torrent Site, Has TV and Movie Downloads! 
http://www.falazar.com/projects/Torrents/tvtorrents_show.php
 

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Ben Goertzel

To allow that somewhere in the Himalayas, someone may be able,
with years of training, to lessen the urgency of hunger and
pain,


An understatement: **I** can dramatically lessen the urgency of hunger
and pain.

What the right sort of training can do is to teach you to come very
close to not attending to them at all...

But I bet these gurus are not stopping the "pain" signals from
propagating from body to brain; they are likely "just" radically
altering the neural interpretation of these signals...

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Philip Goetz

On 12/4/06, Ben Goertzel <[EMAIL PROTECTED]> wrote:

> The statement, "You cannot turn off hunger or pain" is sensible.
> In fact, it's one of the few statements in the English language that
> is LITERALLY so.  Philosophically, it's more certain than
> "I think, therefore I am".
>
> If you maintain your assertion, I'll put you in my killfile, because
> we cannot communicate.

It is reported that, with sufficiently advanced training in
appropriate mind-control arts (e.g. some Oriental ones), something
accurately describable as "turning off hunger or pain" becomes
possible, from a subjective experiential perspective.


To allow that somewhere in the Himalayas, someone may be able,
with years of training, to lessen the urgency of hunger and
pain, is not sufficient evidence to assert that the proposition
that not everyone can turn them off completely is insensible.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Charles D Hixson

James Ratcliff wrote:
There is a needed distinction that must be made here about hunger 
as a goal stack motivator.


We CANNOT change the hunger sensation (short of physical 
manipulations, or mind-control "stuff") as it is a given sensation that 
comes directly from the physical body.


What we can change is the placement in the goal stack, or the priority 
position it is given.  We CAN choose to put it on the bottom of our 
list of goals, or remove it from the list and try and starve ourselves 
to death.
  Our body will then continuously send the hunger signals to us, and we 
must decide how to handle that signal.


So in general, the Signal is there, but the goal is not; it is under 
our control.


James Ratcliff
That's an important distinction, but I would assert that although one 
can insert goals "above" a built-in goal (hunger, e.g.), one cannot 
remove that goal.  There is a very long period when someone on a hunger 
strike must continually reinforce the goal of not-eating.  The goal of 
"satisfy hunger" is only removed when the body decides that it is 
unreachable (at the moment). 

The goal cannot be removed by intention, it can only be overridden and 
suppressed.  Other varieties of goal, volitionally chosen ones, can be 
volitionally revoked.  Even in such cases habit can cause the "automatic 
execution of tasks required to achieve the goal" to be continued.  I 
retired years ago, and although I no longer automatically get up at 5:30 
each morning, I still tend to arise before 8:00.  This is quite a 
contrast from my time in college when I would rarely arise before 9:00, 
and always felt I was getting up too early.  It's true that with a 
minimal effort I can change things so that I get up at (nearly?) any 
particular time...but as soon as I relax it starts drifting back to 
early morning.


Goals are important.  Some are built-in, some are changeable.  Habits 
are also important, perhaps nearly as much so.  Habits are initially 
created to satisfy goals, but when goals change, or circumstances alter, 
the habits don't automatically change in synchrony.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Ben Goertzel

The statement, "You cannot turn off hunger or pain" is sensible.
In fact, it's one of the few statements in the English language that
is LITERALLY so.  Philosophically, it's more certain than
"I think, therefore I am".

If you maintain your assertion, I'll put you in my killfile, because
we cannot communicate.


It is reported that, with sufficiently advanced training in
appropriate mind-control arts (e.g. some Oriental ones), something
accurately describable as "turning off hunger or pain" becomes
possible, from a subjective experiential perspective.

I don't know if the physiological correlates of such experiences have
been studied.

Relatedly, though, I do know that physiological correlates of the
experience of "stopping breathing" that many meditators experience
have been found -- and the correlates were simple: when they thought
they were stopping breathing, the meditators were, in fact, either
stopping or drastically slowing their breathing...

Human potential goes way beyond what is commonly assumed based on our
ordinary states of mind ;-)

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Philip Goetz

On 12/4/06, Mark Waser <[EMAIL PROTECTED]> wrote:

> Why must you argue with everything I say?  Is this not a sensible
> statement?

I don't argue with everything you say.  I only argue with things that I
believe are wrong.  And no, the statements "You cannot turn off hunger or
pain.  You cannot control your emotions" are *NOT* sensible at all.


Mark -

The statement, "You cannot turn off hunger or pain" is sensible.
In fact, it's one of the few statements in the English language that
is LITERALLY so.  Philosophically, it's more certain than
"I think, therefore I am".

If you maintain your assertion, I'll put you in my killfile, because
we cannot communicate.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Ben Goertzel

Regarding the definition of goals and supergoals, I have made attempts at:

http://www.agiri.org/wiki/index.php/Goal

http://www.agiri.org/wiki/index.php/Supergoal

The scope of human supergoals has been moderately well articulated by
Maslow IMO:

http://en.wikipedia.org/wiki/Maslow's_hierarchy_of_needs

BTW, I have borrowed from Stan Franklin the use of the term "drive" to
denote a built-in rather than learned supergoal:

http://www.agiri.org/wiki/index.php/Drive
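
As a loose sketch only (the class names and fields below are illustrative, not 
the wiki definitions), the supergoal/drive distinction might be written like 
this in Python:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Goal:
        name: str
        subgoals: List["Goal"] = field(default_factory=list)

    @dataclass
    class Supergoal(Goal):
        """A top-level goal: nothing sits above it in the hierarchy."""
        builtin: bool = False      # True -> a "drive" in Franklin's sense

    # A drive is just a built-in supergoal; a learned supergoal sets builtin=False.
    stay_alive = Supergoal("stay alive", builtin=True)
    stay_alive.subgoals.append(Goal("satisfy hunger"))
    stay_alive.subgoals.append(Goal("avoid pain"))

    understand_world = Supergoal("understand the world")   # learned/adopted

    for sg in (stay_alive, understand_world):
        kind = "drive (built-in)" if sg.builtin else "learned supergoal"
        print(f"{sg.name}: {kind}, subgoals = {[g.name for g in sg.subgoals]}")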

-- Ben G


On 12/4/06, James Ratcliff <[EMAIL PROTECTED]> wrote:

Ok,
  A lot has been thrown around here about "Top-Level" goals, but no real
definition has been given, and I am confused as it seems to be covering a lot
of ground for some people.

What 'level' and what are these top level goals for humans/AGI's?

It seems that "Staying Alive" is a big one, but that appears to contain
hunger/sleep/ and most other body level needs.

And how hard-wired are these goals, and how (simply) do we really hard-wire
them at all?

Our goal of staying alive appears to be "biologically preferred" or
something like that, but can definitely be overridden by depression / saving
a person in a burning building.

James Ratcliff


Ben Goertzel <[EMAIL PROTECTED]> wrote:
 IMO, humans **can** reprogram their top-level goals, but only with
difficulty. And this is correct: a mind needs to have a certain level
of maturity to really reflect on its own top-level goals, so that it
would be architecturally foolish to build a mind that involved
revision of supergoals at the infant/child phase.

However, without reprogramming our top-level goals, we humans still
have a lot of flexibility in our ultimate orientation. This is
because we are inconsistent systems: our top-level goals form a set of
not-entirely-consistent objectives... so we can shift from one
wired-in top-level goal to another, playing with the inconsistency.
(I note that, because the logic of the human mind is probabilistically
paraconsistent, the existence of inconsistency does not necessarily
imply that "all things are derivable" as it would in typical predicate
logic.)

Those of us who seek to become "as logically consistent as possible,
given the limitations of our computational infrastructure" have a
tough quest, because the human mind/brain is not wired for
consistency; and I suggest that this inconsistency pervades the human
wired-in supergoal set as well...

Much of the inconsistency within the human wired-in supergoal set has
to do with time-horizons. We are wired to want things in the short
term that contradict the things we are wired to want in the
medium/long term; and each of our mind/brains' self-organizing
dynamics needs to work out these evolutionarily-supplied
contradictions on its own One route is to try to replace our
inconsistent initial wiring with a more consistent supergoal set; the
more common route is to oscillate chaotically from one side of the
contradiction to the other...

(Yes, I am speaking loosely here rather than entirely rigorously; but
formalizing all this stuff would take a lot of time and space...)

-- Ben F


On 12/3/06, Matt Mahoney wrote:
>
> --- Mark Waser wrote:
>
> > > You cannot turn off hunger or pain. You cannot
> > > control your emotions.
> >
> > Huh? Matt, can you really not ignore hunger or pain? Are you really 100%
> > at the mercy of your emotions?
>
> Why must you argue with everything I say? Is this not a sensible
statement?
>
> > > Since the synaptic weights cannot be altered by
> > > training (classical or operant conditioning)
> >
> > Who says that synaptic weights cannot be altered? And there's endless
> > irrefutable evidence that the sum of synaptic weights is certainly
> > constantly altering by the directed die-off of neurons.
>
> But not by training. You don't decide to be hungry or not, because animals
> that could do so were removed from the gene pool.
>
> Is this not a sensible way to program the top level goals for an AGI?
>
>
> -- Matt Mahoney, [EMAIL PROTECTED]
>
> -
> This list is sponsored by AGIRI: http://www.agiri.org/email
> To unsubscribe or change your options, please go to:
> http://v2.listbox.com/member/?list_id=303
>

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



___
James Ratcliff - http://falazar.com
New Torrent Site, Has TV and Movie Downloads!
http://www.falazar.com/projects/Torrents/tvtorrents_show.php

 
 This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id

Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread James Ratcliff
Ok,
  A lot has been thrown around here about "Top-Level" goals, but no real 
definition has been given, and I am confused as it seems to be covering a lot of 
ground for some people.

What 'level' and what are these top level goals for humans/AGI's?

It seems that "Staying Alive" is a big one, but that appears to contain 
hunger/sleep/ and most other body level needs.

And how hard-wired are these goals, and how (simply) do we really hard-wire 
them at all?

Our goal of staying alive appears to be "biologically preferred" or something 
like that, but can definitely be overridden by depression / saving a person in a 
burning building.

James Ratcliff

Ben Goertzel <[EMAIL PROTECTED]> wrote: IMO, humans **can** reprogram their 
top-level goals, but only with
difficulty.  And this is correct: a mind needs to have a certain level
of maturity to really reflect on its own top-level goals, so that it
would be architecturally foolish to build a mind that involved
revision of supergoals at the infant/child phase.

However, without reprogramming our top-level goals, we humans still
have a lot of flexibility in our ultimate orientation.  This is
because we are inconsistent systems: our top-level goals form a set of
not-entirely-consistent objectives... so we can shift from one
wired-in top-level goal to another, playing with the inconsistency.
(I note that, because the logic of the human mind is probabilistically
paraconsistent, the existence of inconsistency does not necessarily
imply that "all things are derivable" as it would in typical predicate
logic.)

Those of us who seek to become "as logically consistent as possible,
given the limitations of our computational infrastructure" have a
tough quest, because the human mind/brain is not wired for
consistency; and I suggest that this inconsistency pervades the human
wired-in supergoal set as well...

Much of the inconsistency within the human wired-in supergoal set has
to do with time-horizons.  We are wired to want things in the short
term that contradict the things we are wired to want in the
medium/long term; and each of our mind/brains' self-organizing
dynamics needs to work out these evolutionarily-supplied
contradictions on its own  One route is to try to replace our
inconsistent initial wiring with a more consistent supergoal set; the
more common route is to oscillate chaotically from one side of the
contradiction to the other...

(Yes, I am speaking loosely here rather than entirely rigorously; but
formalizing all this stuff would take a lot of time and space...)

-- Ben F


On 12/3/06, Matt Mahoney  wrote:
>
> --- Mark Waser  wrote:
>
> > > You cannot turn off hunger or pain.  You cannot
> > > control your emotions.
> >
> > Huh?  Matt, can you really not ignore hunger or pain?  Are you really 100%
> > at the mercy of your emotions?
>
> Why must you argue with everything I say?  Is this not a sensible statement?
>
> > > Since the synaptic weights cannot be altered by
> > > training (classical or operant conditioning)
> >
> > Who says that synaptic weights cannot be altered?  And there's endless
> > irrefutable evidence that the sum of synaptic weights is certainly
> > constantly altering by the directed die-off of neurons.
>
> But not by training.  You don't decide to be hungry or not, because animals
> that could do so were removed from the gene pool.
>
> Is this not a sensible way to program the top level goals for an AGI?
>
>
> -- Matt Mahoney, [EMAIL PROTECTED]
>
> -
> This list is sponsored by AGIRI: http://www.agiri.org/email
> To unsubscribe or change your options, please go to:
> http://v2.listbox.com/member/?list_id=303
>

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



___
James Ratcliff - http://falazar.com
New Torrent Site, Has TV and Movie Downloads! 
http://www.falazar.com/projects/Torrents/tvtorrents_show.php
 

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread James Ratcliff
There is a needed distinction that must be made here about hunger as a goal 
stack motivator.

We CANNOT change the hunger sensation (short of physical manipulations, or 
mind-control "stuff") as it is a given sensation that comes directly from the 
physical body. 

What we can change is the placement in the goal stack, or the priority position 
it is given.  We CAN choose to put it on the bottom of our list of goals, or 
remove it from the list and try and starve ourselves to death.
  Our body will then continuously send the hunger signals to us, and we must 
decide how to handle that signal.

So in general, the Signal is there, but the goal is not; it is under our 
control.
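
A rough sketch of that distinction in Python (the priorities and hunger numbers 
below are made up): the signal keeps arriving no matter what, and what is under 
our control is whether a goal for it sits in the stack and at what priority.

    import heapq

    class GoalStack:
        def __init__(self):
            self._heap = []                       # entries: (-priority, goal name)

        def set_goal(self, name, priority):
            self._heap = [(p, g) for (p, g) in self._heap if g != name]
            self._heap.append((-priority, name))
            heapq.heapify(self._heap)

        def drop_goal(self, name):
            self._heap = [(p, g) for (p, g) in self._heap if g != name]
            heapq.heapify(self._heap)

        def top(self):
            return self._heap[0][1] if self._heap else None

    stack = GoalStack()
    stack.set_goal("finish the paper", priority=5)

    for hour in range(4):
        hunger_signal = 2 + 3 * hour              # the body keeps sending this
        # The agent decides what to do with the signal; here it only promotes
        # "eat" once the signal is strong.  A hunger striker would simply not
        # make this call, but the signal would still arrive.
        if hunger_signal > 6:
            stack.set_goal("eat", priority=hunger_signal)
        print(f"hour {hour}: hunger signal={hunger_signal}, acting on: {stack.top()}")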

James Ratcliff


Matt Mahoney <[EMAIL PROTECTED]> wrote: 
--- Mark Waser  wrote:

> > You cannot turn off hunger or pain.  You cannot
> > control your emotions.
> 
> Huh?  Matt, can you really not ignore hunger or pain?  Are you really 100% 
> at the mercy of your emotions?

Why must you argue with everything I say?  Is this not a sensible statement?

> > Since the synaptic weights cannot be altered by
> > training (classical or operant conditioning)
> 
> Who says that synaptic weights cannot be altered?  And there's endless 
> irrefutable evidence that the sum of synaptic weights is certainly 
> constantly altering by the directed die-off of neurons.

But not by training.  You don't decide to be hungry or not, because animals
that could do so were removed from the gene pool.

Is this not a sensible way to program the top level goals for an AGI?


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



___
James Ratcliff - http://falazar.com
New Torrent Site, Has TV and Movie Downloads! 
http://www.falazar.com/projects/Torrents/tvtorrents_show.php
 

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] RSI - What is it and how fast?

2006-12-04 Thread Hank Conn

Brian thanks for your response and Dr. Hall thanks for your post as well. I
will get around to responding to this as soon as time permits. I am
interested in what Michael Anissimov or Michael Wilson has to say.

On 12/4/06, Brian Atkins <[EMAIL PROTECTED]> wrote:


I think this is an interesting, important, and very incomplete subject area, so
thanks for posting this. Some thoughts below.

J. Storrs Hall, PhD. wrote:
>
> Runaway recursive self-improvement
>
>
>> Moore's Law, underneath, is driven by humans.  Replace human
>> intelligence with superhuman intelligence, and the speed of computer
>> improvement will change as well.  Thinking Moore's Law will remain
>> constant even after AIs are introduced to design new chips is like
>> saying that the growth of tool complexity will remain constant even
>> after Homo sapiens displaces older hominid species.  Not so.  We are
>> playing with fundamentally different stuff.
>
> I don't think so. The singulatarians tend to have this mental model of a
> superintelligence that is essentially an analogy of the difference between
> an animal and a human. My model is different. I think there's a level of
> universality, like a Turing machine for computation. The huge difference
> between us and animals is that we're universal and they're not, like the
> difference between an 8080 and an abacus. "superhuman" intelligence will be
> faster but not fundamentally different (in a sense), like the difference
> between an 8080 and an Opteron.
>
> That said, certainly Moore's law will speed up given fast AI. But having one
> human-equivalent AI is not going to make any more difference than having one
> more engineer. Having a thousand-times-human AI won't get you more than
> having 1000 engineers. Only when you can substantially augment the total
> brainpower working on the problem will you begin to see significant effects.

Putting aside the speed differential which you accept, but dismiss as important
for RSI, isn't there a bigger issue you're skipping regarding the other
differences between an Opteron-level PC and an 8080-era box? For example, there
are large differences in the addressable memory amounts. This might for
instance mean whereas a very good example of a human can study and become a
true expert in perhaps a handful of fields, a SI may be able to be a true
expert in many more fields simultaneously and to a more exhaustive degree than
a human. Will this lead to the SI making more breakthroughs per given amount of
runtime? Does it multiply with the speed differential?

Also, what is really the difference between an Einstein/Feynman brain, and
someone with an 80 IQ? It doesn't appear that E/F's brains run simply slightly
faster, or likewise that they simply know more facts. There's something else
isn't there? Call it a slightly better architecture or maybe only certain brain
parts are a bit better, but this would seem to be a 4th issue to consider
besides the previously raised points of speed, "memory capacity", and
"universality". I'm sure we can come up with other things too.

(Btw, the preferred spelling is "singularitarian"; it gets most google hits by
far from what I can tell. Also btw the term arguably now refers more
specifically to someone who wants to work on accelerating the singularity, so
you probably can't group in here every single person who simply believes a
singularity is possible or coming.)

>
>> If modest differences in size, brain structure, and
>> self-reprogrammability make the difference between chimps and humans
>> capable of advanced technological activity, then fundamental
>> differences in these qualities between humans and AIs will lead to a
>> much larger gulf, right away.
>
> Actually Neanderthals had brains bigger than ours by 10%, and we blew them
> off the face of the earth. They had virtually no innovation in 100,000
> years; we went from paleolithic to nanotech in 30,000. I'll bet we were
> universal and they weren't.
>
> Virtually every "advantage" in Elie's list is wrong. The key is to realize
> that we do all these things, just more slowly than we imagine machines
> being able to do them:
>
>> Our source code is not reprogrammable.
>
> We are extremely programmable. The vast majority of skills we use day-to-day
> are learned. If you watched me tie a sheepshank knot a few times, you would
> most likely then be able to tie one yourself.
>
> Note by the way that having to "recompile" new knowledge is a big security
> advantage for the human architecture, as compared with downloading blackbox
> code and running it sight unseen...

This is missing the point entirely isn't it? Learning skills is using your
existing physical brain design, but not modifying its overall or even localized
architecture or modifying "what makes it work". When source code is mentioned,
we're talking a lower level down.

Can you cause your brain to temporarily shut down your visual cortex and other
associated visual parts, reallocate them to expanding you

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Mark Waser
Why must you argue with everything I say?  Is this not a sensible 
statement?


I don't argue with everything you say.  I only argue with things that I 
believe are wrong.  And no, the statements "You cannot turn off hunger or 
pain.  You cannot control your emotions" are *NOT* sensible at all.



You don't decide to be hungry or not, because animals
that could do so were removed from the gene pool.


Funny, I always thought that it was the animals that continued eating while 
being stalked that were removed from the gene pool (suddenly 
and bloodily).  Yes, you eventually have to feed yourself or you die and 
animals mal-adapted enough to not feed themselves will no longer contribute 
to the gene pool, but can you disprove the equally likely contention that 
animals eat because it is very pleasurable to them and that they never feel 
hunger (or do you only have sex because it hurts when you don't)?



Is this not a sensible way to program the top level goals for an AGI?


No.  It's a terrible way to program the top level goals for an AGI.  It 
leads to wireheading, short-circuiting of true goals for faking out the 
evaluation criteria, and all sorts of other problems.


- Original Message - 
From: "Matt Mahoney" <[EMAIL PROTECTED]>

To: 
Sent: Sunday, December 03, 2006 10:19 PM
Subject: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it 
and how fast?]





--- Mark Waser <[EMAIL PROTECTED]> wrote:


> You cannot turn off hunger or pain.  You cannot
> control your emotions.

Huh?  Matt, can you really not ignore hunger or pain?  Are you really 100%
at the mercy of your emotions?


Why must you argue with everything I say?  Is this not a sensible 
statement?



> Since the synaptic weights cannot be altered by
> training (classical or operant conditioning)

Who says that synaptic weights cannot be altered?  And there's endless
irrefutable evidence that the sum of synaptic weights is certainly
constantly altering by the directed die-off of neurons.


But not by training.  You don't decide to be hungry or not, because animals
that could do so were removed from the gene pool.

Is this not a sensible way to program the top level goals for an AGI?


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] RSI - What is it and how fast?

2006-12-04 Thread Mike Dougherty

On 12/4/06, Brian Atkins <[EMAIL PROTECTED]> wrote:


Can you cause your brain to temporarily shut down your visual cortex and
other
associated visual parts, reallocate them to expanding your working memory
by
four times its current size in order to help you juggle consciously the
bits you
need to solve a particularly tough problem? No.



I can close my eyes in order to visualize a geometric association or spatial
relationship...

When I fall asleep and dream about a solution to a problem that I am working
on, there are 'alternate' cognitive processes being performed.

I know... I'm just playing devil's advocate.  :)

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] RSI - What is it and how fast?

2006-12-03 Thread Brian Atkins
I think this is an interesting, important, and very incomplete subject area, so 
thanks for posting this. Some thoughts below.


J. Storrs Hall, PhD. wrote:


Runaway recursive self-improvement



Moore's Law, underneath, is driven by humans.  Replace human
intelligence with superhuman intelligence, and the speed of computer
improvement will change as well.  Thinking Moore's Law will remain
constant even after AIs are introduced to design new chips is like
saying that the growth of tool complexity will remain constant even
after Homo sapiens displaces older hominid species.  Not so.  We are
playing with fundamentally different stuff.


I don't think so. The singulatarians tend to have this mental model of a 
superintelligence that is essentially an analogy of the difference between an 
animal and a human. My model is different. I think there's a level of 
universality, like a Turing machine for computation. The huge difference 
between us and animals is that we're universal and they're not, like the 
difference between an 8080 and an abacus. "superhuman" intelligence will be 
faster but not fundamentally different (in a sense), like the difference 
between an 8080 and an Opteron.


That said, certainly Moore's law will speed up given fast AI. But having one 
human-equivalent AI is not going to make any more difference than having one 
more engineer. Having a thousand-times-human AI won't get you more than 
having 1000 engineers. Only when you can substantially augment the total 
brainpower working on the problem will you begin to see significant effects.


Putting aside the speed differential which you accept, but dismiss as important 
for RSI, isn't there a bigger issue you're skipping regarding the other 
differences between an Opteron-level PC and an 8080-era box? For example, there 
are large differences in the addressable memory amounts. This might for instance 
mean whereas a very good example of a human can study and become a true expert 
in perhaps a handful of fields, a SI may be able to be a true expert in many 
more fields simultaneously and to a more exhaustive degree than a human. Will 
this lead to the SI making more breakthroughs per given amount of runtime? Does 
it multiply with the speed differential?


Also, what is really the difference between an Einstein/Feynman brain, and 
someone with an 80 IQ? It doesn't appear that E/F's brains run simply slightly 
faster, or likewise that they simply know more facts. There's something else 
isn't there? Call it a slightly better architecture or maybe only certain brain 
parts are a bit better, but this would seem to be a 4th issue to consider 
besides the previously raised points of speed, "memory capacity", and 
"universality". I'm sure we can come up with other things too.


(Btw, the preferred spelling is "singularitarian"; it gets most google hits by 
far from what I can tell. Also btw the term arguably now refers more 
specifically to someone who wants to work on accelerating the singularity, so 
you probably can't group in here every single person who simply believes a 
singularity is possible or coming.)





If modest differences in size, brain structure, and
self-reprogrammability make the difference between chimps and humans
capable of advanced technological activity, then fundamental
differences in these qualities between humans and AIs will lead to a
much larger gulf, right away.


Actually Neanderthals had brains bigger than ours by 10%, and we blew them off 
the face of the earth. They had virtually no innovation in 100,000 years; we 
went from paleolithic to nanotech in 30,000. I'll bet we were universal and 
they weren't.


Virtually every "advantage" in Elie's list is wrong. The key is to realize 
that we do all these things, just more slowly than we imagine machines 
being able to do them:


Our source code is not reprogrammable.  


We are extremely programmable. The vast majority of skills we use day-to-day 
are learned. If you watched me tie a sheepshank knot a few times, you would 
most likely then be able to tie one yourself.


Note by the way that having to "recompile" new knowledge is a big security 
advantage for the human architecture, as compared with downloading blackbox 
code and running it sight unseen...


This is missing the point entirely isn't it? Learning skills is using your 
existing physical brain design, but not modifying its overall or even localized 
architecture or modifying "what makes it work". When source code is mentioned, 
we're talking a lower level down.


Can you cause your brain to temporarily shut down your visual cortex and other 
associated visual parts, reallocate them to expanding your working memory by 
four times its current size in order to help you juggle consciously the bits you 
need to solve a particularly tough problem? No. Can you reorganize how certain 
memories are stored so they would have 5 times as much fidelity or losslessly 
capture certain sensory inputs for a given per

Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-03 Thread Ben Goertzel

IMO, humans **can** reprogram their top-level goals, but only with
difficulty.  And this is correct: a mind needs to have a certain level
of maturity to really reflect on its own top-level goals, so that it
would be architecturally foolish to build a mind that involved
revision of supergoals at the infant/child phase.

However, without reprogramming our top-level goals, we humans still
have a lot of flexibility in our ultimate orientation.  This is
because we are inconsistent systems: our top-level goals form a set of
not-entirely-consistent objectives... so we can shift from one
wired-in top-level goal to another, playing with the inconsistency.
(I note that, because the logic of the human mind is probabilistically
paraconsistent, the existence of inconsistency does not necessarily
imply that "all things are derivable" as it would in typical predicate
logic.)

Those of us who seek to become "as logically consistent as possible,
given the limitations of our computational infrastructure" have a
tough quest, because the human mind/brain is not wired for
consistency; and I suggest that this inconsistency pervades the human
wired-in supergoal set as well...

Much of the inconsistency within the human wired-in supergoal set has
to do with time-horizons.  We are wired to want things in the short
term that contradict the things we are wired to want in the
medium/long term; and each of our mind/brains' self-organizing
dynamics needs to work out these evolutionarily-supplied
contradictions on its own  One route is to try to replace our
inconsistent initial wiring with a more consistent supergoal set; the
more common route is to oscillate chaotically from one side of the
contradiction to the other...

(Yes, I am speaking loosely here rather than entirely rigorously; but
formalizing all this stuff would take a lot of time and space...)
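
(A toy illustration of that time-horizon tension, with entirely made-up numbers: two
wired-in goals that score the same actions oppositely, and a drifting weight that
decides which one wins at any given moment, so the choice oscillates.)

    import math

    ACTIONS = {
        "eat the cake": {"short_term": +1.0, "long_term": -1.0},
        "go for a run": {"short_term": -0.5, "long_term": +1.0},
    }

    def choose(t):
        # the relative pull of the short-horizon goal drifts over "time" t
        w_short = 0.5 + 0.5 * math.sin(t)
        w_long = 1.0 - w_short
        scores = {a: w_short * v["short_term"] + w_long * v["long_term"]
                  for a, v in ACTIONS.items()}
        return max(scores, key=scores.get)

    for t in range(8):
        print(t, choose(t))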

-- Ben F


On 12/3/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:


--- Mark Waser <[EMAIL PROTECTED]> wrote:

> > You cannot turn off hunger or pain.  You cannot
> > control your emotions.
>
> Huh?  Matt, can you really not ignore hunger or pain?  Are you really 100%
> at the mercy of your emotions?

Why must you argue with everything I say?  Is this not a sensible statement?

> > Since the synaptic weights cannot be altered by
> > training (classical or operant conditioning)
>
> Who says that synaptic weights cannot be altered?  And there's endless
> irrefutable evidence that the sum of synaptic weights is certainly
> constantly altering by the directed die-off of neurons.

But not by training.  You don't decide to be hungry or not, because animals
that could do so were removed from the gene pool.

Is this not a sensible way to program the top level goals for an AGI?


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-03 Thread Matt Mahoney

--- Mark Waser <[EMAIL PROTECTED]> wrote:

> > You cannot turn off hunger or pain.  You cannot
> > control your emotions.
> 
> Huh?  Matt, can you really not ignore hunger or pain?  Are you really 100% 
> at the mercy of your emotions?

Why must you argue with everything I say?  Is this not a sensible statement?

> > Since the synaptic weights cannot be altered by
> > training (classical or operant conditioning)
> 
> Who says that synaptic weights cannot be altered?  And there's endless 
> irrefutable evidence that the sum of synaptic weights is certainly 
> constantly altering by the directed die-off of neurons.

But not by training.  You don't decide to be hungry or not, because animals
that could do so were removed from the gene pool.

Is this not a sensible way to program the top level goals for an AGI?


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-03 Thread Mark Waser

You cannot turn off hunger or pain.  You cannot
control your emotions.


Huh?  Matt, can you really not ignore hunger or pain?  Are you really 100% 
at the mercy of your emotions?



Since the synaptic weights cannot be altered by
training (classical or operant conditioning)


Who says that synaptic weights cannot be altered?  And there's endless 
irrefutable evidence that the sum of synaptic weights is certainly 
constantly altering by the directed die-off of neurons.




- Original Message - 
From: "Matt Mahoney" <[EMAIL PROTECTED]>

To: 
Sent: Saturday, December 02, 2006 9:42 PM
Subject: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it 
and how fast?]




--- Richard Loosemore <[EMAIL PROTECTED]> wrote:

I am disputing the very idea that monkeys (or rats or pigeons or humans)
have a "part of the brain which generates the reward/punishment signal
for operant conditioning."

This is behaviorism.  I find myself completely at a loss to know where
to start, if I have to explain what is wrong with behaviorism.


Call it what you want.  I am arguing that there are parts of the brain (e.g.
the nucleus accumbens) responsible for reinforcement learning, and
furthermore, that the synapses along the input paths to these regions are not
trainable.  I argue this has to be the case because an intelligent system
cannot be allowed to modify its motivational system.  Our most fundamental
models of intelligent agents require this (e.g. AIXI -- the reward signal is
computed by the environment).  You cannot turn off hunger or pain.  You cannot
control your emotions.  Since the synaptic weights cannot be altered by
training (classical or operant conditioning), they must be hardwired as
determined by your DNA.

Do you agree?  If not, what part of this argument do you disagree with?

That reward and punishment exist and result in learning in humans?

That there are neurons dedicated to computing reinforcement signals?

That the human motivational system (by which I mean the logic of computing the
reinforcement signals from sensory input) is not trainable?

That the motivational system is completely specified by DNA?

That all human learning can be reduced to classical and operant conditioning?

That humans are animals that differ only in the ability to learn language?

That models of goal seeking agents like AIXI are realistic models of
intelligence?

Do you object to behaviorism because of its view that consciousness and
free will do not exist, except as beliefs?

Do you object to the assertion that the brain is a computer with finite memory
and speed?  That your life consists of running a program?  Is this wrong, or
just uncomfortable?




-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-03 Thread Charles D Hixson

Mark Waser wrote:

...
For me, yes, all of those things are good since they are on my list of 
goals *unless* the method of accomplishing them steps on a higher goal 
OR a collection of goals with greater total weight OR violates one of 
my limitations (restrictions).

...

If you put every good thing on your "list of goals", then you will have 
a VERY long list.
I would propose that most of those items listed should be derived goals 
rather than anything primary.  And that the primary goals should be 
rather few.  I'm certain that three is too few.  Probably it should be 
fewer than 200.  The challenge is phrasing them so that they:

1) cover every needed situation
2) are short enough to be debuggable
They should probably be divided into two sets.  One set would be a list 
of goals to be aimed for, and the other would be a list of filters that 
had to be passed.
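
To make the two-set idea concrete, here is a toy Python sketch (the goal 
and filter functions are invented purely for illustration, not anyone's 
actual design): candidate actions are scored against the goal list, but 
every filter is a hard pass/fail test applied first.

def select_action(candidates, goals, filters):
    # Filters are hard constraints: an action that fails any of them is
    # never even scored against the goals.
    admissible = [a for a in candidates if all(f(a) for f in filters)]
    if not admissible:
        return None
    # Among admissible actions, pick the one that best serves the goals.
    return max(admissible, key=lambda a: sum(g(a) for g in goals))

# Hypothetical goals and filters:
goals = [
    lambda a: a.get("knowledge_gain", 0.0),      # aim for learning
    lambda a: a.get("human_benefit", 0.0),       # aim for helping people
]
filters = [
    lambda a: not a.get("harms_human", False),   # must always hold
    lambda a: a.get("informed_consent", True),   # must always hold
]

actions = [
    {"name": "experiment", "knowledge_gain": 0.9, "harms_human": True},
    {"name": "ask_first", "knowledge_gain": 0.6, "human_benefit": 0.3},
]
best = select_action(actions, goals, filters)
print(best["name"] if best else "no admissible action")   # -> ask_first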


Think of these as the axioms on which the mind is being erected.  Axioms 
need to be few and simple; it's the theorems that are derived from them 
that get complicated.


N.B.:  This is an ANALOGY.  I'm not proposing a theorem prover as the 
model of an AI.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-03 Thread Philip Goetz

On 12/2/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:

I am disputing the very idea that monkeys (or rats or pigeons or humans)
have a "part of the brain which generates the reward/punishment signal
for operant conditioning."


Well, there is a part of the brain which generates a
temporal-difference signal for reinforcement learning.  Not so very
different.  At least, not different enough for this brain mechanism to
escape having Richard's scorn heaped upon it.

http://www.iro.umontreal.ca/~lisa/pointeurs/RivestNIPS2004.pdf
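
For anyone who hasn't met the term, the temporal-difference signal in 
question is just the reward-prediction error of TD learning.  A bare-bones 
TD(0) value update (a generic textbook sketch, not the specific model in 
the paper above) looks like this in Python:

def td0_update(V, state, reward, next_state, alpha=0.1, gamma=0.9):
    # delta is the temporal-difference (prediction-error) signal:
    # observed reward plus discounted next-state value, minus the prediction.
    delta = reward + gamma * V.get(next_state, 0.0) - V.get(state, 0.0)
    V[state] = V.get(state, 0.0) + alpha * delta
    return delta

V = {}
# Trivial two-state chain: s0 -> s1 with reward 1.0 on the transition.
for _ in range(100):
    td0_update(V, "s0", 1.0, "s1")
print(round(V["s0"], 3))   # converges toward 1.0, since V(s1) stays 0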

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-02 Thread Matt Mahoney
--- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> I am disputing the very idea that monkeys (or rats or pigeons or humans) 
> have a "part of the brain which generates the reward/punishment signal 
> for operant conditioning."
> 
> This is behaviorism.  I find myself completely at a loss to know where 
> to start, if I have to explain what is wrong with behaviorism.

Call it what you want.  I am arguing that there are parts of the brain (e.g.
the nucleus accumbens) responsible for reinforcement learning, and
furthermore, that the synapses along the input paths to these regions are not
trainable.  I argue this has to be the case because an intelligent system
cannot be allowed to modify its motivational system.  Our most fundamental
models of intelligent agents require this (e.g. AIXI -- the reward signal is
computed by the environment).  You cannot turn off hunger or pain.  You cannot
control your emotions.  Since the synaptic weights cannot be altered by
training (classical or operant conditioning), they must be hardwired as
determined by your DNA.
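
Read architecturally, the claim amounts to something like the following toy 
setup (a sketch in the AIXI spirit, not AIXI itself, with invented names): 
the environment owns the reward computation, and the agent's trainable part 
only ever sees the reward as an input -- it holds no handle on the code 
that generates it.

import random

class Environment:
    def __init__(self):
        self._state = 0
    def step(self, action):
        # The reward rule lives here, outside the agent -- the analogue of
        # hardwired motivational circuitry in the argument above.
        self._state = (self._state + action) % 5
        return self._state, (1.0 if self._state == 0 else 0.0)

class Agent:
    def __init__(self, actions):
        self.q = {}                 # the trainable part: action values per state
        self.actions = actions
    def act(self, state, eps=0.1):
        if random.random() < eps or state not in self.q:
            return random.choice(self.actions)
        return max(self.q[state], key=self.q[state].get)
    def learn(self, state, action, reward, alpha=0.2):
        self.q.setdefault(state, {a: 0.0 for a in self.actions})
        self.q[state][action] += alpha * (reward - self.q[state][action])

env, agent, state = Environment(), Agent([1, 2, 3]), 0
for _ in range(2000):
    a = agent.act(state)
    next_state, r = env.step(a)
    agent.learn(state, a, r)        # only the policy changes, never the reward rule
    state = next_state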

Do you agree?  If not, what part of this argument do you disagree with?

That reward and punishment exist and result in learning in humans?

That there are neurons dedicated to computing reinforcement signals?

That the human motivational system (by which I mean the logic of computing the
reinforcement signals from sensory input) is not trainable?

That the motivational system is completely specified by DNA?

That all human learning can be reduced to classical and operant conditioning?

That humans are animals that differ only in the ability to learn language?

That models of goal seeking agents like AIXI are realistic models of
intelligence?

Do you object to behaviorism because of its view that consciousness and
free will do not exist, except as beliefs?

Do you object to the assertion that the brain is a computer with finite memory
and speed?  That your life consists of running a program?  Is this wrong, or
just uncomfortable?




-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-02 Thread Ben Goertzel

David...

On 11/29/06, David Hart <[EMAIL PROTECTED]> wrote:

On 11/30/06, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> Richard,
>
> This is certainly true, and is why in Novamente we use a goal stack
> only as one aspect of cognitive control...

Ben,

Could you elaborate for the list some of the nuances between [explicit]
cognitive control and [implicit] cognitive bias, either theoretically or
within Novamente?

David


Well, there is nothing  too obscure in the way explicit
goal-achievement dynamics and implicit goal-achievement dynamics
co-exist in Novamente...

Quite simply, in the NM system as it is now (and as it is planned to
be in the reasonably near future), explicit goal achievement is one
among many dynamics.  There are also many "ambient" cognitive
processes that the system just does "because that's the way it was
created."  These include a certain level of reasoning, concept
formation, procedure learning, etc.

It is anticipated that ultimately, once a NM system becomes
sufficiently advanced, explicit goal-achievement may be allowed to
extend across all aspects of the system.  But this does not make sense
initially for the reason Richard Loosemore pointed out: A baby does
not have the knowledge to reason "If I don't act nice to my mommy, I
may be neglected and die, therefore I should be nice to my mommy
because it is a subgoal of my supergoal of staying alive."  It doesn't
have the knowledge to figure out precisely how to be nice to its mommy
either.  Instead, a baby needs to be preprogrammed with the desire to
be nice to its mommy, and with specific behaviors along these lines.
A lot of preprogrammed stuff -- including preprogrammed **learning
dynamics** -- seem to be necessary to get a realistic mind to the
level where it can achieve complex goals with a reasonable degree of
flexibility and effectiveness.

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-02 Thread Richard Loosemore



--- Richard Loosemore <[EMAIL PROTECTED]> wrote:


Matt Mahoney wrote:
> I guess we are arguing terminology.  I mean that the part of the brain
which
> generates the reward/punishment signal for operant conditioning is not
> trainable.  It is programmed only through evolution.

There is no such thing.  This is the kind of psychology that died out at
least thirty years ago (with the exception of a few diehards in North
Wales and Cambridge).


Are we arguing terminology again, or are you really saying that 
animals cannot
be trained using reward and punishment?  By "operant conditioning", I 
mean

reinforcement learning.

I realize that monkeys can be trained to work for tokens that can be 
exchanged
for food.  When I say that the motivational logic cannot be trained, I 
mean
the connection from food to reward, not from tokens to reward.  What 
you are

training is the association of tokens to food and work to tokens.


Mark Waser wrote:
> He's arguing with the phrase "It is programmed only through evolution."
>
> If I'm wrong and he is not, I certainly am.

I certainly agree with Mark on this point (I dispute Matt's contention 
that "It is programmed only through evolution") but that was only a 
subset of my main disagreement.


I am disputing the very idea that monkeys (or rats or pigeons or humans) 
have a "part of the brain which generates the reward/punishment signal 
for operant conditioning."


This is behaviorism.  I find myself completely at a loss to know where 
to start, if I have to explain what is wrong with behaviorism.


This is the 21st century, and we have had cognitive psychology for, 
what?, fifty years now?  Cognitive psychology was born when people 
suddenly realized that the behaviorist conception of the mechanisms of 
mind was ridiculously stupid.


As a superficial model of how to control the behavior of rats, it works 
great.  (It even works for some aspects of the behavior of children). 
But as a model of the *mechanisms* that make up a thinking system?  I 
can't think of words expressive enough to convey my contempt for the 
idea.  The people who invented behaviorism managed to shut the science 
of psychology down for about three or four decades, so I don't look very 
charitably on what they did.


If someone said to you that "All computers in the world today actually 
work by having a mechanism inside them that looks at the current inputs 
(from mice, keyboard, etc) and then produces an output by getting the 
correct response for that input from an internal lookup table" you would 
be at a loss to know how to go about fixing that person's broken 
conception of the machinery of computation.  You would probably tell 
them to go read some *really* basic textbooks about computers, then get 
a degree in the subject if necessary, then come back when their ideas 
had gotten straightened out.


[Don't take the above analogy too literally, btw:  I know about the 
differences between look-up tables and literal behaviorism; I was 
merely conveying the depth of ignorance of mechanism that the two sets 
of ideas share].


And, no, this is nothing whatsoever to do with "terminology" (nor have I 
ever simply argued about terminology in the past, as you imply).



Richard Loosemore.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-02 Thread Richard Loosemore

Philip Goetz wrote:

On 12/1/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:


The questions you asked above are predicated on a goal stack approach.

You are repeating the same mistakes that I already dealt with.


Some people would call it "repeating the same mistakes I already dealt 
with".

Some people would call it "continuing to disagree".  :)


Some people would call it "continuing to disagree because they haven't 
yet figured out that their argument has been undermined."


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-02 Thread Mark Waser

He's arguing with the phrase "It is programmed only through evolution."

If I'm wrong and he is not, I certainly am.

- Original Message - 
From: "Matt Mahoney" <[EMAIL PROTECTED]>

To: 
Sent: Saturday, December 02, 2006 4:26 PM
Subject: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it 
and how fast?]





--- Richard Loosemore <[EMAIL PROTECTED]> wrote:


Matt Mahoney wrote:
> I guess we are arguing terminology.  I mean that the part of the brain
which
> generates the reward/punishment signal for operant conditioning is not
> trainable.  It is programmed only through evolution.

There is no such thing.  This is the kind of psychology that died out at
least thirty years ago (with the exception of a few diehards in North
Wales and Cambridge).


Are we arguing terminology again, or are you really saying that animals 
cannot

be trained using reward and punishment?  By "operant conditioning", I mean
reinforcement learning.

I realize that monkeys can be trained to work for tokens that can be 
exchanged
for food.  When I say that the motivational logic cannot be trained, I 
mean
the connection from food to reward, not from tokens to reward.  What you 
are

training is the association of tokens to food and work to tokens.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-02 Thread Matt Mahoney

--- Hank Conn <[EMAIL PROTECTED]> wrote:

> On 12/1/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> >
> >
> > --- Hank Conn <[EMAIL PROTECTED]> wrote:
> >
> > > On 12/1/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > > > I suppose the alternative is to not scan brains, but then you still
> > have
> > > > death, disease and suffering.  I'm sorry it is not a happy picture
> > either
> > > > way.
> > >
> > >
> > > Or you have no death, disease, or suffering, but not wireheading.
> >
> > How do you propose to reduce the human mortality rate from 100%?
> 
> 
> Why do you ask?

You seemed to imply you knew an alternative to brain scanning, or did I
misunderstand?



-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-02 Thread Matt Mahoney

--- Richard Loosemore <[EMAIL PROTECTED]> wrote:

> Matt Mahoney wrote:
> > I guess we are arguing terminology.  I mean that the part of the brain
> which
> > generates the reward/punishment signal for operant conditioning is not
> > trainable.  It is programmed only through evolution.
> 
> There is no such thing.  This is the kind of psychology that died out at 
> least thirty years ago (with the exception of a few diehards in North 
> Wales and Cambridge).

Are we arguing terminology again, or are you really saying that animals cannot
be trained using reward and punishment?  By "operant conditioning", I mean
reinforcement learning.

I realize that monkeys can be trained to work for tokens that can be exchanged
for food.  When I say that the motivational logic cannot be trained, I mean
the connection from food to reward, not from tokens to reward.  What you are
training is the association of tokens to food and work to tokens.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] RSI - What is it and how fast?

2006-12-02 Thread J. Storrs Hall, PhD.
On Saturday 02 December 2006 13:57, Mark Waser wrote:
> Thank you for cross-posting this.  Could you please give us more
> information on your book?
>
> I must also say that I appreciate the common-sense wisdom and repeated bon
> mots that the "sky is falling" crowd seem to lack.
>
> - Original Message -
> From: "J. Storrs Hall, PhD." <[EMAIL PROTECTED]>
> Subject: Re: [agi] RSI - What is it and how fast?
>
> > I've just finished a book on this subject, (coming out in May from
> > Prometheus). ...

Thanks!  

The book, under the title "Beyond AI: Creating the Conscience of the Machine", 
is an outgrowth of my Ethics for Machines paper 
(http://autogeny.org/ethics.html). It covers the history of AI, my opinions 
on how far it needs to go and what it needs to do to create a really 
intelligent system, and what needs to be done for the system to be a moral 
agent and how good we can expect it to be.   

It's at the publisher's now; they have a fairly long latency to store shelves. 
They are nationally distributed, tho -- I could walk into any Borders or B&N 
in the country and find my previous book, Nanofuture, on the shelf.

One of the big puzzles in AI that has bothered me since the 70s is what 
happened to cybernetics and why AI and cybernetics weren't "consilient." In 
the process of research I found the answer, and it's a weird one. 

--Josh

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-02 Thread Philip Goetz

On 12/2/06, Mark Waser <[EMAIL PROTECTED]> wrote:


> Philip Goetz snidely responded
> Some people would call it "repeating the same mistakes I already dealt
> with".
> Some people would call it "continuing to disagree".  :)

Richard's point was that the poster was simply repeating previous points
without responding to Richard's arguments.  Responsible "continuing to
disagree" would have included, at least, acknowledging and responding to or
arguing with Richard's points.


It would have, if Matt were replying to Richard.  However, he was
replying to Hank.

However, I'll make my snide remarks directly to the poster in the future.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-02 Thread Mark Waser

On 12/1/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:

The questions you asked above are predicated on a goal stack approach.
You are repeating the same mistakes that I already dealt with.



Philip Goetz snidely responded
Some people would call it "repeating the same mistakes I already dealt 
with".

Some people would call it "continuing to disagree".  :)


   Richard's point was that the poster was simply repeating previous points 
without responding to Richard's arguments.  Responsible "continuing to 
disagree" would have included, at least, acknowledging and responding to or 
arguing with Richard's points.  Not doing so is simply an "is too/is not" 
argument. 


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-02 Thread Mark Waser
I'd be interested in knowing if anyone else on this list has had any 
experience with policy-based governing . . . .


Questions like

Are the following things good?
- End of disease.
- End of death.
- End of pain and suffering.
- A paradise where all of your needs are met and wishes fulfilled.
can only be properly answered by reference to your *ordered* list of goals 
*WITH* reference to your list of limitations (restrictions, to use the 
lingo).


For me, yes, all of those things are good since they are on my list of goals 
*unless* the method of accomplishing them steps on a higher goal OR a 
collection of goals with greater total weight OR violates one of my 
limitations (restrictions).
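
As a toy illustration of that rule (goal names, weights and restrictions 
are invented purely for the example), the evaluation might be sketched like 
this: a proposed method is rejected outright if it violates any 
restriction, and otherwise accepted only if the goals it serves outweigh 
the goals it steps on.

def acceptable(method, goals, restrictions):
    # Restrictions are absolute: any violation rejects the method outright.
    if any(r in method["violates"] for r in restrictions):
        return False
    served = sum(w for g, w in goals.items() if g in method["serves"])
    harmed = sum(w for g, w in goals.items() if g in method["steps_on"])
    return served > harmed

goals = {"end_disease": 8, "personal_autonomy": 10, "comfort": 3}
restrictions = ["act_without_informed_consent"]

proposal = {
    "serves":   ["end_disease"],
    "steps_on": ["comfort"],
    "violates": [],
}
print(acceptable(proposal, goals, restrictions))   # True: weight 8 > 3, no restriction hit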


As long as I'm intelligent enough to put things like "don't do anything to 
me without my informed consent" on my limitations list, I don't expect too 
many problems (and certainly not any of the problems that were brought up 
later in the "Questions" post).


Personally, I find the level of many of these morality discussions 
ridiculous.  It is relatively easy for any competent systems architect to 
design complete, internally consistent systems of morality from sets of 
goals and restrictions.  The problem is that any such system is just not 
going to match what everybody wants since everybody embodies conflicting and 
irreconcilable requirements.


Richard's system of evolved morality through a large number of diffuse 
constraints is a good attempt at creating a morality system that is unlikely 
to offend anyone while still making "reasonable" decisions about contested 
issues.  The problem with Richard's system is that it may well make 
decisions like outlawing stem cell research since so many people are against 
it (or maybe, if it is sufficiently intelligent, its internal consistency 
routines may reduce the weight of the arguments from people who insist upon 
conflicting priorities like "I want the longest life and best possible 
medical care" and "I don't want stem cell research").


The good point about an internally consistent system designed by me is that 
it's not going to outlaw stem cell research.  The bad point about my system 
is that it's going to offend *a lot* of people and if it's someone else's 
system, it may well offend (or exterminate) me.  And, I must say, based upon 
the level of many of these discussions, the thought of a lot of you 
designing a morality system is *REALLY* frightening.


- Original Message - 
From: "Richard Loosemore" <[EMAIL PROTECTED]>

To: 
Sent: Friday, December 01, 2006 11:56 PM
Subject: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it 
and how fast?]




Matt Mahoney wrote:

--- Hank Conn <[EMAIL PROTECTED]> wrote:
The further the actual target goal state of that particular AI is away 
from

the actual target goal state of humanity, the worse.

The goal of ... humanity... is that the AGI implemented that will have 
the
strongest RSI curve also will be such that its actual target goal state 
is

exactly congruent to the actual target goal state of humanity.


This was discussed on the Singularity list.  Even if we get the 
motivational
system and goals right, things can still go badly.  Are the following 
things

good?

- End of disease.
- End of death.
- End of pain and suffering.
- A paradise where all of your needs are met and wishes fulfilled.

You might think so, and program an AGI with these goals.  Suppose the AGI
figures out that by scanning your brain and copying the information into 
a
computer and making many redundant backups, that you become immortal. 
Furthermore, once your consciousness becomes a computation in silicon, 
your

universe can be simulated to be anything you want it to be.


See my previous lengthy post on the subject of motivational systems vs 
"goal stack" systems.


The questions you asked above are predicated on a goal stack approach.

You are repeating the same mistakes that I already dealt with.


Richard Loosemore

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] RSI - What is it and how fast?

2006-12-02 Thread Mark Waser
Thank you for cross-posting this.  Could you please give us more information 
on your book?


I must also say that I appreciate the common-sense wisdom and repeated bon 
mots that the "sky is falling" crowd seem to lack.


- Original Message - 
From: "J. Storrs Hall, PhD." <[EMAIL PROTECTED]>

To: 
Sent: Friday, December 01, 2006 7:52 PM
Subject: Re: [agi] RSI - What is it and how fast?



I've just finished a book on this subject, (coming out in May from
Prometheus). I also had an extended conversation/argument about it with
some smart people on another mailing list, of which I reproduce the
juicy parts here: (quoted inclusions the other guys, straight text me)



Michael Vassar has been working very hard to convince us that one kind of
emergent entity--General Artificial Intelligence--will kill us all,
automatically, inevitably, by accident, as soon as we try to use it.


Having just finished writing a book on the subject, I have a few 
observations:


True general AI probably shows up in the next decade, well before personal
nanofactories. AI is gathering steam, and has 50 years of design-ahead 
lying

around waiting to be taken advantage of. Nanotech is also picking up steam
(and of course I mean Drexlerian nanotech) but more slowly and there is a
relatively tiny amount of design-ahead going on.

Currently a HEPP (human-equivalent processing power) in the form of an IBM
Blue Gene costs $30 million. In a decade it will cost the same as a new 
car;
hobbyists and small businesses will be able to afford them. However, the 
AI

software available will be new, still experimental, and have a lot of
learning to do. (It'll be AGI because it will be able to do the learning)

Another decade will elapse before the hardware and software combine to 
produce
AGIs as productive as a decent-sized company. Another one yet before they 
are

competitive with a national economy. In other words, for 30 years (give or
take) they will have to live and work in a human economy, and will find it
much more beneficial to work within the economy than try to do an end run.

Unless somebody does something REALLY STUPID like putting the government 
in

charge of them, there will be a huge diverse population of AGIs continuing
the market dynamic (and forming the environment that keeps each other in
check) by the time they become unequivocally hyperhuman. By huge and 
diverse
I mean billions, and ranging in intelligence from subhuman (Roomba ca 
2030)

on up.

Cooperation and other moral traits have evolved in humans (and other 
animals)

because it is more beneficial than the "war of each against all." AIs can
know this (they'll be able to read, you know) and form "mutual beneficence
societies" of the kind that developed naturally, but do it intentionally,
reliably, and fast. Such societies would be more efficient than the rest 
of
the competing world of AIs. There's no reason that trustworthy humans 
might

not be admitted into such societies as well.

Cooperation and the Hobbesian War are both stable evolutionary strategies. 
If

we are so stupid as to start the AIs out in the Hobbesian one, we deserve
whatever we get. (And kindly notice that the "Friendly AI" scheme does
exactly that...)

May you live in interesting times.



Runaway recursive self-improvement



Moore's Law, underneath, is driven by humans.  Replace human
intelligence with superhuman intelligence, and the speed of computer
improvement will change as well.  Thinking Moore's Law will remain
constant even after AIs are introduced to design new chips is like
saying that the growth of tool complexity will remain constant even
after Homo sapiens displaces older hominid species.  Not so.  We are
playing with fundamentally different stuff.


I don't think so. The singulatarians tend to have this mental model of a
superintelligence that is essentially an analogy of the difference between 
an

animal and a human. My model is different. I think there's a level of
universality, like a Turing machine for computation. The huge difference
between us and animals is that we're universal and they're not, like the
difference between an 8080 and an abacus. "superhuman" intelligence will 
be

faster but not fundamentally different (in a sense), like the difference
between an 8080 and an Opteron.

That said, certainly Moore's law will speed up given fast AI. But having 
one
human-equivalent AI is not going to make any more difference than having 
one

more engineer. Having a thousand-times-human AI won't get you more than
having 1000 engineers. Only when you can substantially augment the total
brainpower working on the problem will you begin to see significant 
effects.



If modest differences in size, brain structure, and
self-reprogrammability make the difference between chimps and humans
capable of advanced technological activity, then fundamental

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-02 Thread Hank Conn

On 12/1/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:



--- Hank Conn <[EMAIL PROTECTED]> wrote:

> On 12/1/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > The "goals of humanity", like those of all other species, were determined by
> > evolution.
> > It is to propagate the species.
>
>
> That's not the goal of humanity. That's the goal of the evolution of
> humanity, which has been defunct for a while.

We have slowed evolution through medical advances, birth control and
genetic
engineering, but I don't think we have stopped it completely yet.

> You are confusing this abstract idea of an optimization target with the
> actual motivation system. You can change your motivation system all you
> want, but you wouldn't (intentionally) change the fundamental
specification
> of the optimization target which is maintained by the motivation system
as a
> whole.

I guess we are arguing terminology.  I mean that the part of the brain
which
generates the reward/punishment signal for operant conditioning is not
trainable.  It is programmed only through evolution.

>   To some extent you can do this.  When rats can
> > electrically stimulate their nucleus accumbens by pressing a lever,
they
> > do so
> > nonstop in preference to food and water until they die.
> >
> > I suppose the alternative is to not scan brains, but then you still
have
> > death, disease and suffering.  I'm sorry it is not a happy picture
either
> > way.
>
>
> Or you have no death, disease, or suffering, but not wireheading.

How do you propose to reduce the human mortality rate from 100%?



Why do you ask?

-hank


-- Matt Mahoney, [EMAIL PROTECTED]


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-02 Thread Philip Goetz

On 12/1/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:


The questions you asked above are predicated on a goal stack approach.

You are repeating the same mistakes that I already dealt with.


Some people would call it "repeating the same mistakes I already dealt with".
Some people would call it "continuing to disagree".  :)

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-02 Thread J. Storrs Hall, PhD.
On Friday 01 December 2006 23:42, Richard Loosemore wrote:

> It's a lot easier than you suppose.  The system would be built in two
> parts:  the motivational system, which would not change substantially
> during RSI, and the "thinking part" (for want of a better term), which
> is where you do all the improvement.

For concreteness, I have called these the Utility Function and World Model in 
my writings on the subject...

A plan that says "Let RSI consist of growing the WM and not the UF" suffers 
from the problem that the sophistication of the WM's understanding soon makes 
the UF look crude and stupid. Human babies want food, proximity to their 
mothers, and are frightened of strangers. That's good for babies but a person 
with greater understanding and capabilities is better off with (and the rest of us 
are better off if the person has) a more sophisticated UF as well.
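
Whatever one concludes about that worry, the UF/WM split itself is easy to 
state concretely.  A toy sketch (invented names, not Novamente or any 
actual design) in which self-improvement is only allowed to swap out the 
world-model part:

class UtilityFunction:
    def __init__(self, weights):
        self._weights = dict(weights)          # meant to stay fixed
    def utility(self, predicted_outcome):
        return sum(self._weights.get(k, 0.0) * v
                   for k, v in predicted_outcome.items())

class WorldModel:
    def __init__(self, predictor):
        self.predictor = predictor             # the part RSI may replace
    def predict(self, action):
        return self.predictor(action)

def choose(actions, wm, uf):
    return max(actions, key=lambda a: uf.utility(wm.predict(a)))

uf = UtilityFunction({"food": 1.0, "safety": 2.0})
wm = WorldModel(lambda a: {"food": 0.5, "safety": 0.1} if a == "forage"
                else {"food": 0.0, "safety": 0.9})
print(choose(["forage", "hide"], wm, uf))      # "hide" under the crude predictor

# "Self-improvement": swap in a better predictor; the UF is never touched.
wm.predictor = (lambda a: {"food": 0.8, "safety": 0.7} if a == "forage"
                else {"food": 0.0, "safety": 0.9})
print(choose(["forage", "hide"], wm, uf))      # now "forage" wins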

> It is not quite a contradiction, but certainly this would be impossible:
>   deciding to make a modification that clearly was going to leave it
> wanting something that, if it wanted that thing today, would contradict
> its current priorities.  Do you see why?  The motivational mechanism IS
> what the system wants, it is not what the system is considering wanting.

This is a good first cut at the problem, and is taken by e.g. Nick Bostrom in 
a widely cited paper at http://www.nickbostrom.com/ethics/ai.html

> The system is not protecting current beliefs, it is believing its
> current beliefs.  Becoming more capable of understanding the "reality"
> it is immersed in?  You have implicitly put a motivational priority in
> your system when you suggest that that is important to it ... does that
> rank higher than its empathy with the human race?
>
> You see where I am going:  there is nothing god-given about the desire
> to "understand reality" in a better way.  That is just one more
> candidate for a motivational priority.

Ah, but consider: knowing more about how the world works is often a valuable 
asset to the attempt to increase the utility of the world, *no matter* what 
else the utility function might specify. 

Thus, a system's self-modification (or evolution in general) is unlikely to 
remove curiosity / thirst for knowledge / desire to improve one's WM as a 
high utility even as it changes other things. 

There are several such properties of a utility function that are likely to be 
invariant under self-improvement or evolution. It is by the use of such 
invariants that we can design self-improving AIs with reasonable assurance of 
their continued beneficence.
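
As a cartoon of how such an invariant might serve as an acceptance test for 
self-modifications (the particular invariant, names and numbers here are 
invented for illustration): a candidate utility function is accepted only 
if it still rewards gaining knowledge by at least some margin, whatever 
else it changes.

CURIOSITY_FLOOR = 0.5   # minimum utility bonus for gaining knowledge

def accept_modification(candidate_uf, probe_outcomes):
    # Reject any candidate UF that would weaken the curiosity invariant.
    for outcome in probe_outcomes:
        richer = dict(outcome, knowledge_gain=1.0)
        if candidate_uf(richer) - candidate_uf(outcome) < CURIOSITY_FLOOR:
            return False
    return True

# A proposed replacement UF that mostly values comfort:
candidate = lambda o: 0.1 * o.get("knowledge_gain", 0) + 3.0 * o.get("comfort", 0)
print(accept_modification(candidate, [{"comfort": 0.2}]))   # False: curiosity weakened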

--Josh

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Richard Loosemore

Matt Mahoney wrote:

--- Hank Conn <[EMAIL PROTECTED]> wrote:

The further the actual target goal state of that particular AI is away from
the actual target goal state of humanity, the worse.

The goal of ... humanity... is that the AGI implemented that will have the
strongest RSI curve also will be such that its actual target goal state is
exactly congruent to the actual target goal state of humanity.


This was discussed on the Singularity list.  Even if we get the motivational
system and goals right, things can still go badly.  Are the following things
good?

- End of disease.
- End of death.
- End of pain and suffering.
- A paradise where all of your needs are met and wishes fulfilled.

You might think so, and program an AGI with these goals.  Suppose the AGI
figures out that by scanning your brain and copying the information into a
computer and making many redundant backups, that you become immortal. 
Furthermore, once your consciousness becomes a computation in silicon, your

universe can be simulated to be anything you want it to be.


See my previous lengthy post on the subject of motivational systems vs 
"goal stack" systems.


The questions you asked above are predicated on a goal stack approach.

You are repeating the same mistakes that I already dealt with.


Richard Loosemore

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Richard Loosemore

Matt Mahoney wrote:

I guess we are arguing terminology.  I mean that the part of the brain which
generates the reward/punishment signal for operant conditioning is not
trainable.  It is programmed only through evolution.


There is no such thing.  This is the kind of psychology that died out at 
least thirty years ago (with the exception of a few diehards in North 
Wales and Cambridge).




Richard Loosemore


[With apologies to Fergus, Nick and Ian, who may someday come across 
this message and start flaming me].


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Richard Loosemore

Samantha Atkins wrote:


On Nov 30, 2006, at 12:21 PM, Richard Loosemore wrote:


Recursive Self Inmprovement?

The answer is yes, but with some qualifications.

In general RSI would be useful to the system IF it were done in such a 
way as to preserve its existing motivational priorities.




How could the system anticipate whether or not significant RSI would 
lead it to question or modify its current motivational priorities?  Are 
you suggesting that the system can somehow simulate an improved version 
of itself in sufficient detail to know this?  It seems quite unlikely.


Well, I'm certainly not suggesting the latter.

It's a lot easier than you suppose.  The system would be built in two 
parts:  the motivational system, which would not change substantially 
during RSI, and the "thinking part" (for want of a better term), which 
is where you do all the improvement.


The idea of "questioning or modifying its current motivational 
priorities" is extremely problematic, so be careful how quickly you 
deploy it as if it meant something coherent.  What would it mean for the 
system to modify it in such a way as to contradict the current state? 
That gets very close to a contradiction in terms.


It is not quite a contradiction, but certainly this would be impossible: 
 deciding to make a modification that clearly was going to leave it 
wanting something that, if it wanted that thing today, would contradict 
its current priorities.  Do you see why?  The motivational mechanism IS 
what the system wants, it is not what the system is considering wanting.




That means:  the system would *not* choose to do any RSI if the RSI 
could not be done in such a way as to preserve its current 
motivational priorities:  to do so would be to risk subverting its own 
most important desires.  (Note carefully that the system itself would 
put this constraint on its own development, it would not have anything 
to do with us controlling it).




If the improvements were an improvement in capabilities and such 
improvement led to changes in its priorities then how would those 
improvements be undesirable due to showing current motivational 
priorities as being in some way lacking?  Why is protecting current 
beliefs or motivational priorities more important than becoming 
presumably more capable and more capable of understanding the reality 
the system is immersed in?


The system is not protecting current beliefs, it is believing its 
current beliefs.  Becoming more capable of understanding the "reality" 
it is immersed in?  You have implicitly put a motivational priority in 
your system when you suggest that that is important to it ... does that 
rank higher than its empathy with the human race?


You see where I am going:  there is nothing god-given about the desire 
to "understand reality" in a better way.  That is just one more 
candidate for a motivational priority.





There is a bit of a problem with the term "RSI" here:  to answer your 
question fully we might have to get more specific about what that 
would entail.


Finally:  the usefulness of RSI would not necessarily be indefinite. 
The system could well get to a situation where further RSI was not 
particularly consistent with its goals.  It could live without it.




Then are its goals more important to it than reality?

- samantha


Now you have become too abstract for me to answer, unless you are 
repeating the previous point.




Richard Loosemore.















-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] RSI - What is it and how fast?

2006-12-01 Thread J. Storrs Hall, PhD.
On Friday 01 December 2006 20:06, Philip Goetz wrote:

> Thus, I don't think my ability to follow rules written on paper to
> implement a Turing machine proves that the operations powering my
> consciousness are Turing-complete.

Actually, I think it does prove it, since your simulation of a Turing machine 
would consist of conscious operations. On the other hand, I agree with the 
spirit of your argument, (as I understand it), that our ability to simulate 
Turing machines on paper doesn't *prove* that we are generally universal 
machines at the level that we do most of the things that we do.

Even so, I think that we are, just barely. I hope so, anyway, or the AIs will 
put us in zoos and rightly so. 

--Josh

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Matt Mahoney

--- Hank Conn <[EMAIL PROTECTED]> wrote:

> On 12/1/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > The "goals of humanity", like those of all other species, were determined by
> > evolution.
> > It is to propagate the species.
> 
> 
> That's not the goal of humanity. That's the goal of the evolution of
> humanity, which has been defunct for a while.

We have slowed evolution through medical advances, birth control and genetic
engineering, but I don't think we have stopped it completely yet.

> You are confusing this abstract idea of an optimization target with the
> actual motivation system. You can change your motivation system all you
> want, but you wouldn't (intentionally) change the fundamental specification
> of the optimization target which is maintained by the motivation system as a
> whole.

I guess we are arguing terminology.  I mean that the part of the brain which
generates the reward/punishment signal for operant conditioning is not
trainable.  It is programmed only through evolution.

>   To some extent you can do this.  When rats can
> > electrically stimulate their nucleus accumbens by pressing a lever, they
> > do so
> > nonstop in preference to food and water until they die.
> >
> > I suppose the alternative is to not scan brains, but then you still have
> > death, disease and suffering.  I'm sorry it is not a happy picture either
> > way.
> 
> 
> Or you have no death, disease, or suffering, but not wireheading.

How do you propose to reduce the human mortality rate from 100%?


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] RSI - What is it and how fast?

2006-12-01 Thread Philip Goetz

On 12/1/06, J. Storrs Hall, PhD. <[EMAIL PROTECTED]> wrote:

I don't think so. The singulatarians tend to have this mental model of a
superintelligence that is essentially an analogy of the difference between an
animal and a human. My model is different. I think there's a level of
universality, like a Turing machine for computation. The huge difference
between us and animals is that we're universal and they're not, like the
difference between an 8080 and an abacus. "superhuman" intelligence will be
faster but not fundamentally different (in a sense), like the difference
between an 8080 and an Opteron.


I've often heard this claim, but what is the evidence that a human
brain is a universal turing machine?  People say that the fact that
humans can implement a turing machine, by following the instructions
stating how one works, proves that our minds are Turing-complete.
BUT, if you reject Searle's Chinese-room argument, you must believe
that the consciousness that exists in the Chinese room is not inside
the human in the room, but in the combination (human + rules + data).
You must then ALSO believe that the Turing-complete consciousness that
is emulating a Turing machine, as a human follows the rules of a
Turing machine, resides not in the human, but in the complete system.
Thus, I don't think my ability to follow rules written on paper to
implement a Turing machine proves that the operations powering my
consciousness are Turing-complete.

It will be awfully embarrassing if we build up the philosophical basis
on which our machine descendants justify our extermination when they
find that we're not UTMs...

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] RSI - What is it and how fast?

2006-12-01 Thread J. Storrs Hall, PhD.
I've just finished a book on this subject, (coming out in May from
Prometheus). I also had an extended conversation/argument about it with
some smart people on another mailing list, of which I reproduce the
juicy parts here: (quoted inclusions the other guys, straight text me)


> Michael Vassar has been working very hard to convince us that one kind of
> emergent entity--General Artificial Intelligence--will kill us all,
> automatically, inevitably, by accident, as soon as we try to use it.

Having just finished writing a book on the subject, I have a few observations:

True general AI probably shows up in the next decade, well before personal 
nanofactories. AI is gathering steam, and has 50 years of design-ahead lying 
around waiting to be taken advantage of. Nanotech is also picking up steam 
(and of course I mean Drexlerian nanotech) but more slowly and there is a 
relatively tiny amount of design-ahead going on.

Currently a HEPP (human-equivalent processing power) in the form of an IBM 
Blue Gene costs $30 million. In a decade it will cost the same as a new car; 
hobbyists and small businesses will be able to afford them. However, the AI 
software available will be new, still experimental, and have a lot of 
learning to do. (It'll be AGI because it will be able to do the learning)

Another decade will elapse before the hardware and software combine to produce 
AGIs as productive as a decent-sized company. Another one yet before they are 
competitive with a national economy. In other words, for 30 years (give or 
take) they will have to live and work in a human economy, and will find it 
much more beneficial to work within the economy than try to do an end run. 

Unless somebody does something REALLY STUPID like putting the government in 
charge of them, there will be a huge diverse population of AGIs continuing 
the market dynamic (and forming the environment that keeps each other in 
check) by the time they become unequivocally hyperhuman. By huge and diverse 
I mean billions, and ranging in intelligence from subhuman (Roomba ca 2030) 
on up.

Cooperation and other moral traits have evolved in humans (and other animals) 
because it is more beneficial than the "war of each against all." AIs can 
know this (they'll be able to read, you know) and form "mutual beneficence 
societies" of the kind that developed naturally, but do it intentionally, 
reliably, and fast. Such societies would be more efficient than the rest of 
the competing world of AIs. There's no reason that trustworthy humans might 
not be admitted into such societies as well. 

Cooperation and the Hobbesian War are both stable evolutionary strategies. If 
we are so stupid as to start the AIs out in the Hobbesian one, we deserve 
whatever we get. (And kindly notice that the "Friendly AI" scheme does 
exactly that...)

May you live in interesting times.



Runaway recursive self-improvement


> Moore's Law, underneath, is driven by humans.  Replace human
> intelligence with superhuman intelligence, and the speed of computer
> improvement will change as well.  Thinking Moore's Law will remain
> constant even after AIs are introduced to design new chips is like
> saying that the growth of tool complexity will remain constant even
> after Homo sapiens displaces older hominid species.  Not so.  We are
> playing with fundamentally different stuff.

I don't think so. The singulatarians tend to have this mental model of a 
superintelligence that is essentially an analogy of the difference between an 
animal and a human. My model is different. I think there's a level of 
universality, like a Turing machine for computation. The huge difference 
between us and animals is that we're universal and they're not, like the 
difference between an 8080 and an abacus. "superhuman" intelligence will be 
faster but not fundamentally different (in a sense), like the difference 
between an 8080 and an Opteron.

That said, certainly Moore's law will speed up given fast AI. But having one 
human-equivalent AI is not going to make any more difference than having one 
more engineer. Having a thousand-times-human AI won't get you more than 
having 1000 engineers. Only when you can substantially augment the total 
brainpower working on the problem will you begin to see significant effects.

> If modest differences in size, brain structure, and
> self-reprogrammability make the difference between chimps and humans
> capable of advanced technological activity, then fundamental
> differences in these qualities between humans and AIs will lead to a
> much larger gulf, right away.

Actually Neanderthals had brains bigger than ours by 10%, and we blew them off 
the face of the earth. They had virtually no innovation in 100,000 years; we 
went from paleolithic to nanotech in 30,000. I'll bet we were universal and 
they weren't.

Virtually every "advantage" in Elie's list is wrong. The key is to realize 
that we do all these things, just more slowly than we imagine machin

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Hank Conn


This seems rather circular and ill-defined.

- samantha



Yeah I don't really know what I'm talking about at all.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Hank Conn

On 12/1/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:


--- Hank Conn <[EMAIL PROTECTED]> wrote:
> The further the actual target goal state of that particular AI is away
from
> the actual target goal state of humanity, the worse.
>
> The goal of ... humanity... is that the AGI implemented that will have
the
> strongest RSI curve also will be such that its actual target goal state
is
> exactly congruent to the actual target goal state of humanity.

This was discussed on the Singularity list.  Even if we get the
motivational
system and goals right, things can still go badly.  Are the following
things
good?

- End of disease.
- End of death.
- End of pain and suffering.
- A paradise where all of your needs are met and wishes fulfilled.

You might think so, and program an AGI with these goals.  Suppose the AGI
figures out that by scanning your brain and copying the information into a
computer and making many redundant backups, that you become immortal.
Furthermore, once your consciousness becomes a computation in silicon,
your
universe can be simulated to be anything you want it to be.

The "goals of humanity", like those of all other species, were determined by
evolution.
It is to propagate the species.



That's not the goal of humanity. That's the goal of the evolution of
humanity, which has been defunct for a while.


 This goal is met by a genetically programmed

individual motivation toward reproduction and a fear of death, at least
until
you are past the age of reproduction and you no longer serve a purpose.
Animals without these goals don't pass on their DNA.

A property of motivational systems is that they cannot be altered.  You cannot
turn
off your desire to eat or your fear of pain.  You cannot decide you will
start
liking what you don't like, or vice versa.  You cannot because if you
could,
you would not pass on your DNA.



You are confusing this abstract idea of an optimization target with the
actual motivation system. You can change your motivation system all you
want, but you wouldn't (intentionally) change the fundamental specification
of the optimization target which is maintained by the motivation system as a
whole.


Once your brain is in software, what is to stop you from telling the AGI

(that
you built) to reprogram your motivational system that you built so you are
happy with what you have?



Uh... go for it.


 To some extent you can do this.  When rats can

electrically stimulate their nucleus accumbens by pressing a lever, they
do so
nonstop in preference to food and water until they die.

I suppose the alternative is to not scan brains, but then you still have
death, disease and suffering.  I'm sorry it is not a happy picture either
way.



Or you have no death, disease, or suffering, but not wireheading.


-- Matt Mahoney, [EMAIL PROTECTED]


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Matt Mahoney
--- Hank Conn <[EMAIL PROTECTED]> wrote:
> The further the actual target goal state of that particular AI is away from
> the actual target goal state of humanity, the worse.
> 
> The goal of ... humanity... is that the AGI implemented that will have the
> strongest RSI curve also will be such that its actual target goal state is
> exactly congruent to the actual target goal state of humanity.

This was discussed on the Singularity list.  Even if we get the motivational
system and goals right, things can still go badly.  Are the following things
good?

- End of disease.
- End of death.
- End of pain and suffering.
- A paradise where all of your needs are met and wishes fulfilled.

You might think so, and program an AGI with these goals.  Suppose the AGI
figures out that by scanning your brain and copying the information into a
computer and making many redundant backups, that you become immortal. 
Furthermore, once your consciousness becomes a computation in silicon, your
universe can be simulated to be anything you want it to be.

The "goals of humanity", like those of all other species, were determined by evolution. 
It is to propagate the species.  This goal is met by a genetically programmed
individual motivation toward reproduction and a fear of death, at least until
you are past the age of reproduction and you no longer serve a purpose. 
Animals without these goals don't pass on their DNA.

A property of motivational systems is that they cannot be altered.  You cannot turn
off your desire to eat or your fear of pain.  You cannot decide you will start
liking what you don't like, or vice versa.  You cannot because if you could,
you would not pass on your DNA.

Once your brain is in software, what is to stop you from telling the AGI (that
you built) to reprogram your motivational system that you built so you are
happy with what you have?  To some extent you can do this.  When rats can
electrically stimulate their nucleus accumbens by pressing a lever, they do so
nonstop in preference to food and water until they die.

I suppose the alternative is to not scan brains, but then you still have
death, disease and suffering.  I'm sorry it is not a happy picture either way.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Samantha Atkins


On Nov 30, 2006, at 10:15 PM, Hank Conn wrote:


Yes, now the point being that if you have an AGI and you aren't in a  
sufficiently fast RSI loop, there is a good chance that if someone  
else were to launch an AGI with a faster RSI loop, your AGI would  
lose control to the other AGI where the goals of the other AGI  
differed from yours.




Are you sure that "control" would be a high priority of such systems?



What I'm saying is that the outcome of the Singularity is going to  
be exactly the target goal state of the AGI with the strongest RSI  
curve.


The further the actual target goal state of that particular AI is  
away from the actual target goal state of humanity, the worse.




What on earth is "the actual target goal state of humanity"?   AFAIK  
there is no such thing.  For that matter I doubt very much there is or  
can be an unchanging target goal state for any real AGI.




The goal of ... humanity... is that the AGI implemented that will  
have the strongest RSI curve also will be such that its actual  
target goal state is exactly congruent to the actual target goal  
state of humanity.




This seems rather circular and ill-defined.

- samantha


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Samantha Atkins


On Nov 30, 2006, at 12:21 PM, Richard Loosemore wrote:


Recursive Self Inmprovement?

The answer is yes, but with some qualifications.

In general RSI would be useful to the system IF it were done in such  
a way as to preserve its existing motivational priorities.




How could the system anticipate whether or not significant RSI would  
lead it to question or modify its current motivational priorities?   
Are you suggesting that the system can somehow simulate an improved  
version of itself in sufficient detail to know this?  It seems quite  
unlikely.



That means:  the system would *not* choose to do any RSI if the RSI  
could not be done in such a way as to preserve its current  
motivational priorities:  to do so would be to risk subverting its  
own most important desires.  (Note carefully that the system itself  
would put this constraint on its own development, it would not have  
anything to do with us controlling it).




If the improvements were an increase in capabilities, and that increase led  
to changes in its priorities, why would those improvements be undesirable  
merely because they showed the current motivational priorities to be in some  
way lacking?  Why is protecting current beliefs or motivational priorities  
more important than becoming presumably more capable, and more capable of  
understanding the reality the system is immersed in?



There is a bit of a problem with the term "RSI" here:  to answer  
your question fully we might have to get more specific about what  
that would entail.


Finally:  the usefulness of RSI would not necessarily be indefinite.  
The system could well get to a situation where further RSI was not  
particularly consistent with its goals.  It could live without it.




Then are its goals more important to it than reality?

- samantha

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Richard Loosemore

James Ratcliff wrote:
You could start a smaller AI with a simple hardcoded "desire" or reward 
mechanism to "learn" new things, or to increase the size of its knowledge.


 That would be a simple way to programmatically insert it.  That along 
with a seed AI, must be put in there in the beginning.


Remember we are not just throwing it out there with no goals or anything 
in the beginning, or it would learn nothing, and DO nothing at all.


Later this piece may need to be directly modifiable by the code to 
decrease or increase its desire to "explore" or learn new things, 
depending on its other goals.


James


It's difficult to get into all the details (this is a big subject), but 
you do have to remember that what you have done is to say *what* needs 
to be done (no doubt in anybody's mind that it needs a desire to learn!), 
while the problem under discussion is the difficulty of figuring out 
*how* to do that.


That's where my arguments come in:  I was claiming that the idea of 
motivating an AGI has not been properly thought through by many people, 
who just assume that the system has a stack of goals (top level goal, 
then subgoals that, if achieved in sequence or in parallel, would cause 
the top level goal to succeed, then a breakdown of those subgoals into 
sub-subgoals, and so on for maybe hundreds of levels ... you probably 
get the idea).  My claim is that this design is too naive, and that 
minor variations on this design won't necessarily improve it.
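
To make the contrast concrete, here is a deliberately naive sketch (in Python, 
with invented names) of the kind of goal-stack controller being criticized here. 
It is not anyone's actual AGI design, just the caricature under discussion:

# A deliberately naive goal-stack controller, for illustration only.
# Goals are plain strings; a decomposition table maps a goal to subgoals.
# Real systems would be far more elaborate; this is the caricature at issue.

DECOMPOSITION = {
    "make paperclips": ["acquire wire", "bend wire"],
    "acquire wire": ["locate wire supplier", "obtain wire"],
}

def run_goal_stack(top_level_goal, max_steps=20):
    stack = [top_level_goal]
    steps = 0
    while stack and steps < max_steps:
        goal = stack.pop()
        subgoals = DECOMPOSITION.get(goal)
        if subgoals:
            # Unpack the goal into subgoals (pushed so the first is done first).
            stack.extend(reversed(subgoals))
        else:
            # A primitive goal: pretend to execute it.
            print("executing:", goal)
        steps += 1

run_goal_stack("make paperclips")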


The devil, in other words, is in the details.


Richard Loosemore.










*/Philip Goetz <[EMAIL PROTECTED]>/* wrote:

On 11/19/06, Richard Loosemore wrote:

 > The goal-stack AI might very well turn out simply not to be a
workable
 > design at all! I really do mean that: it won't become intelligent
 > enough to be a threat. Specifically, we may find that the kind of
 > system that drives itself using only a goal stack never makes it
up to
 > full human level intelligence because it simply cannot do the kind of
 > general, broad-spectrum learning that a Motivational System AI
would do.
 >
 > Why? Many reasons, but one is that the system could never learn
 > autonomously from a low level of knowledge *because* it is using
goals
 > that are articulated using the system's own knowledge base. Put
simply,
 > when the system is in its child phase it cannot have the goal
"acquire
 > new knowledge" because it cannot understand the meaning of the words
 > "acquire" or "new" or "knowledge"! It isn't due to learn those words
 > until it becomes more mature (develops more mature concepts), so
how can
 > it put "acquire new knowledge" on its goal stack and then unpack that
 > goal into subgoals, etc?

This is an excellent observation that I hadn't heard before -
thanks, Richard!

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303




___
James Ratcliff - http://falazar.com
New Torrent Site, Has TV and Movie Downloads! 
http://www.falazar.com/projects/Torrents/tvtorrents_show.php






This list is sponsored by AGIRI: http://www.agiri.org/email To 
unsubscribe or change your options, please go to: 
http://v2.listbox.com/member/?list_id=303


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Richard Loosemore

Hank Conn wrote:
Yes, now the point being that if you have an AGI and you aren't in a 
sufficiently fast RSI loop, there is a good chance that if someone else 
were to launch an AGI with a faster RSI loop, your AGI would lose 
control to the other AGI where the goals of the other AGI differed from 
yours.
 
What I'm saying is that the outcome of the Singularity is going to be 
exactly the target goal state of the AGI with the strongest RSI curve.
 
The further the actual target goal state of that particular AI is away 
from the actual target goal state of humanity, the worse.
 
The goal of ... humanity... is that the AGI implemented that will have 
the strongest RSI curve also will be such that its actual target goal 
state is exactly congruent to the actual target goal state of humanity.
 
This is assuming AGI becomes capable of RSI before any human does. I 
think that's a reasonable assumption (this is the AGI list after all).


I agree with you, as far as you take these various points, although with 
some refinements.


Taking them in reverse order:

1)  There is no doubt in my mind that machine RSI will come long before 
human RSI.


2)  The goal of humanity is to build an AGI with goals (in the most 
general sense of "goals") that matches its own.  That is as it should 
be, and I think there are techniques that could lead to that.  I also 
believe that those techniques will lead to AGI quicker than other 
techniques, which is a very good thing.


3)  The way that the "RSI curves" play out is not clear at this point, 
but my thoughts are that because of the nature of exponential curves 
(flattish for a long time, then the "knee", then off to the sky) we will 
*not* have an arms race situation with competing AGI projects.  An arms 
race can only really happen if the projects stay on closely matched, 
fairly shallow curves:  people need to be neck and neck to have a 
situation in which nobody quite gets the upper hand and everyone 
competes.  That is fundamentally at odds with the exponential shape of 
the RSI curve.


What does that mean in practice?  It means that when the first system 
gets to the really fast part of the curve, it might (for example) go from 
human level to 10x human level in a couple of months, then to 100x in a 
month, then 1000x in a week.  Regardless of the exact details of these 
numbers, you can see that such a sudden arrival at superintelligence 
would most likely *not* occur at the same moment as someone else's project.
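
To see the arithmetic behind that claim, here is a small illustrative 
calculation (Python).  The 30-day doubling time and 60-day head start are 
invented numbers, not predictions:

# Illustrative arithmetic only: two projects riding the same exponential
# capability curve, the second starting 60 days behind the first.

doubling_time_days = 30.0
head_start_days = 60.0

def capability(t_days):
    """Capability relative to 'human level' at day 0."""
    return 2.0 ** (t_days / doubling_time_days)

for t in (0, 90, 180, 270, 360):
    leader = capability(t)
    follower = capability(max(t - head_start_days, 0.0))
    print(f"day {t:3d}: leader {leader:8.0f}x  follower {follower:8.0f}x  "
          f"gap {leader - follower:8.0f}x")

# The follower reaches any given capability level a fixed 60 days later, but
# near the knee of the curve that fixed lag corresponds to an enormous
# difference in absolute capability -- the two projects do not "arrive" together.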


Then, the first system would quietly move to change any other projects 
so that their motivations were not a threat.  It wouldn't take them out, 
it would just ensure they were safe.


End of worries.

The only thing to worry about is that the first system have sympathetic 
motivations.  I think ensuring that should be our responsibility.  I 
think, also, that the first design will use the kind of diffuse 
motivational system that I talked about before, and for that reason it 
will most likely be similar in design to ours, and not be violent or 
aggressive.


I actually have stronger beliefs than that, but they are hard to 
articulate - basically, that a smart enough system will naturally and 
inevitably *tend* toward sympathy for life.  But I am not relying on 
that extra idea for the above arguments.


Does that make sense?


Richard Loosemore








-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-11-30 Thread Hank Conn

On 11/30/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:



> Hank Conn wrote:
[snip...]
>  > I'm not asserting any specific AI design. And I don't see how
>  > a motivational system based on "large numbers of diffuse
constrains"
>  > inherently prohibits RSI, or really has any relevance to this. "A
>  > motivation system based on large numbers of diffuse constraints"
does
>  > not, by itself, solve the problem- if the particular constraints
> do not
>  > form a congruent mapping to the concerns of humanity, regardless
of
>  > their number or level of diffuseness, then we are likely facing
an
>  > Unfriendly outcome of the Singularity, at some point in the
future.
>
> Richard Loosemore wrote:
> The point I am heading towards, in all of this, is that we need to
> unpack some of these ideas in great detail in order to come to
sensible
> conclusions.
>
> I think the best way would be in a full length paper, although I did
> talk about some of that detail in my recent lengthy post on
> motivational
> systems.
>
> Let me try to bring out just one point, so you can see where I am
going
> when I suggest it needs much more detail.  In the above, you really
are
> asserting one specific AI design, because you talk about the goal
stack
> as if this could be so simple that the programmer would be able to
> insert the "make paperclips" goal and the machine would go right
ahead
> and do that.  That type of AI design is very, very different from
the
> Motivational System AI that I discussed before (the one with the
diffuse
> set of constraints driving it).
>
>
> Here is one of many differences between the two approaches.
>
> The goal-stack AI might very well turn out simply not to be a
workable
> design at all!  I really do mean that:  it won't become intelligent
> enough to be a threat.   Specifically, we may find that the kind of
> system that drives itself using only a goal stack never makes it up
to
> full human level intelligence because it simply cannot do the kind
of
> general, broad-spectrum learning that a Motivational System AI would
do.
>
> Why?  Many reasons, but one is that the system could never learn
> autonomously from a low level of knowledge *because* it is using
goals
> that are articulated using the system's own knowledge base.  Put
simply,
> when the system is in its child phase it cannot have the goal
"acquire
> new knowledge" because it cannot understand the meaning of the words
> "acquire" or "new" or "knowledge"!  It isn't due to learn those
words
> until it becomes more mature (develops more mature concepts), so how
can
> it put "acquire new knowledge" on its goal stack and then unpack
that
> goal into subgoals, etc?
>
>
> Try the same question with any goal that the system might have when
it
> is in its infancy, and you'll see what I mean.  The whole concept of
a
> system driven only by a goal stack with statements that resolve on
its
> knowledge base is that it needs to be already very intelligent
before it
> can use them.
>
>
>
> If your system is intelligent, it has some goal(s) (or "motivation(s)").
> For most really complex goals (or motivations), RSI is an extremely
> useful subgoal (sub-...motivation). This makes no further assumptions
> about the intelligence in question, including those relating to the
> design of the goal (motivation) system.
>
>
> Would you agree?
>
>
> -hank

Recursive Self-Improvement?

The answer is yes, but with some qualifications.

In general RSI would be useful to the system IF it were done in such a
way as to preserve its existing motivational priorities.

That means:  the system would *not* choose to do any RSI if the RSI
could not be done in such a way as to preserve its current motivational
priorities:  to do so would be to risk subverting its own most important
desires.  (Note carefully that the system itself would put this
constraint on its own development, it would not have anything to do with
us controlling it).

There is a bit of a problem with the term "RSI" here:  to answer your
question fully we might have to get more specific about what that would
entail.

Finally:  the usefulness of RSI would not necessarily be indefinite.
The system could well get to a situation where further RSI was not
particularly consistent with its goals.  It could live without it.


Richard Loosemore



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303




Yes, now the point being that if you have an AGI and you aren't in a
sufficiently fast RSI loop, there is a good chance that if someone else were
to launch an AGI with a faster RSI loop, your AGI would lose control to the
other AGI where the goals of the other AGI differed from yours.

What I'm saying is that the outcome of 

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-11-30 Thread Richard Loosemore



Hank Conn wrote:

[snip...]

 > I'm not asserting any specific AI design. And I don't see how
 > a motivational system based on "large numbers of diffuse constrains"
 > inherently prohibits RSI, or really has any relevance to this. "A
 > motivation system based on large numbers of diffuse constraints" does
 > not, by itself, solve the problem- if the particular constraints
do not
 > form a congruent mapping to the concerns of humanity, regardless of
 > their number or level of diffuseness, then we are likely facing an
 > Unfriendly outcome of the Singularity, at some point in the future.

Richard Loosemore wrote:
The point I am heading towards, in all of this, is that we need to
unpack some of these ideas in great detail in order to come to sensible
conclusions.

I think the best way would be in a full length paper, although I did
talk about some of that detail in my recent lengthy post on
motivational
systems.

Let me try to bring out just one point, so you can see where I am going
when I suggest it needs much more detail.  In the above, you really are
asserting one specific AI design, because you talk about the goal stack
as if this could be so simple that the programmer would be able to
insert the "make paperclips" goal and the machine would go right ahead
and do that.  That type of AI design is very, very different from the
Motivational System AI that I discussed before (the one with the diffuse
set of constraints driving it).


Here is one of many differences between the two approaches.

The goal-stack AI might very well turn out simply not to be a workable
design at all!  I really do mean that:  it won't become intelligent
enough to be a threat.   Specifically, we may find that the kind of
system that drives itself using only a goal stack never makes it up to
full human level intelligence because it simply cannot do the kind of
general, broad-spectrum learning that a Motivational System AI would do.

Why?  Many reasons, but one is that the system could never learn
autonomously from a low level of knowledge *because* it is using goals
that are articulated using the system's own knowledge base.  Put simply,
when the system is in its child phase it cannot have the goal "acquire
new knowledge" because it cannot understand the meaning of the words
"acquire" or "new" or "knowledge"!  It isn't due to learn those words
until it becomes more mature (develops more mature concepts), so how can
it put "acquire new knowledge" on its goal stack and then unpack that
goal into subgoals, etc?


Try the same question with any goal that the system might have when it
is in its infancy, and you'll see what I mean.  The whole concept of a
system driven only by a goal stack with statements that resolve on its
knowledge base is that it needs to be already very intelligent before it
can use them.

 
 
If your system is intelligent, it has some goal(s) (or "motivation(s)"). 
For most really complex goals (or motivations), RSI is an extremely 
useful subgoal (sub-...motivation). This makes no further assumptions 
about the intelligence in question, including those relating to the 
design of the goal (motivation) system.
 
 
Would you agree?
 
 
-hank


Recursive Self-Improvement?

The answer is yes, but with some qualifications.

In general RSI would be useful to the system IF it were done in such a 
way as to preserve its existing motivational priorities.


That means:  the system would *not* choose to do any RSI if the RSI 
could not be done in such a way as to preserve its current motivational 
priorities:  to do so would be to risk subverting its own most important 
desires.  (Note carefully that the system itself would put this 
constraint on its own development, it would not have anything to do with 
us controlling it).


There is a bit of a problem with the term "RSI" here:  to answer your 
question fully we might have to get more specific about what that would 
entail.


Finally:  the usefulness of RSI would not necessarily be indefinite. 
The system could well get to a situation where further RSI was not 
particularly consistent with its goals.  It could live without it.



Richard Loosemore



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-11-30 Thread Hank Conn

On 11/19/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:


Hank Conn wrote:
>  > Yes, you are exactly right. The question is which of my
> assumption are
>  > unrealistic?
>
> Well, you could start with the idea that the AI has "... a strong
goal
> that directs its behavior to aggressively take advantage of these
> means...".   It depends what you mean by "goal" (an item on the task
> stack or a motivational drive?  They are different things) and this
> begs
> a question about who the idiot was that designed it so that it
pursue
> this kind of aggressive behavior rather than some other!
>
> A goal is a problem you want to solve in some environment. The "idiot"
> who designed it may program its goal to be, say, making paperclips.
> Then, after some thought and RSI, the AI decides converting the entire
> planet into a computronium in order to figure out how to maximize the
> number of paper clips in the Universe will satisfy this goal quite
> optimally. Anybody could program it with any goal in mind, and RSI
> happens to be a very useful process for accomplishing many complex
goals.
>
> There is *so* much packed into your statement that it is difficult
to go
> into it in detail.
>
> Just to start with, you would need to cross compare the above
statement
> with the account I gave recently of how a system should be built
with a
> motivational system based on large numbers of diffuse
constraints.  Your
> description is one particular, rather dangerous, design for an AI -
it
> is not an inevitable design.
>
>
> I'm not asserting any specific AI design. And I don't see how
> a motivational system based on "large numbers of diffuse constrains"
> inherently prohibits RSI, or really has any relevance to this. "A
> motivation system based on large numbers of diffuse constraints" does
> not, by itself, solve the problem- if the particular constraints do not
> form a congruent mapping to the concerns of humanity, regardless of
> their number or level of diffuseness, then we are likely facing an
> Unfriendly outcome of the Singularity, at some point in the future.

The point I am heading towards, in all of this, is that we need to
unpack some of these ideas in great detail in order to come to sensible
conclusions.

I think the best way would be in a full length paper, although I did
talk about some of that detail in my recent lengthy post on motivational
systems.

Let me try to bring out just one point, so you can see where I am going
when I suggest it needs much more detail.  In the above, you really are
asserting one specific AI design, because you talk about the goal stack
as if this could be so simple that the programmer would be able to
insert the "make paperclips" goal and the machine would go right ahead
and do that.  That type of AI design is very, very different from the
Motivational System AI that I discussed before (the one with the diffuse
set of constraints driving it).



Here is one of many differences between the two approaches.


The goal-stack AI might very well turn out simply not to be a workable
design at all!  I really do mean that:  it won't become intelligent
enough to be a threat.   Specifically, we may find that the kind of
system that drives itself using only a goal stack never makes it up to
full human level intelligence because it simply cannot do the kind of
general, broad-spectrum learning that a Motivational System AI would do.

Why?  Many reasons, but one is that the system could never learn
autonomously from a low level of knowledge *because* it is using goals
that are articulated using the system's own knowledge base.  Put simply,
when the system is in its child phase it cannot have the goal "acquire
new knowledge" because it cannot understand the meaning of the words
"acquire" or "new" or "knowledge"!  It isn't due to learn those words
until it becomes more mature (develops more mature concepts), so how can
it put "acquire new knowledge" on its goal stack and then unpack that
goal into subgoals, etc?



Try the same question with any goal that the system might have when it
is in its infancy, and you'll see what I mean.  The whole concept of a
system driven only by a goal stack with statements that resolve on its
knowledge base is that it needs to be already very intelligent before it
can use them.




If your system is intelligent, it has some goal(s) (or "motivation(s)").
For most really complex goals (or motivations), RSI is an extremely useful
subgoal (sub-...motivation). This makes no further assumptions about the
intelligence in question, including those relating to the design of the goal
(motivation) system.


Would you agree?


-hank


I have never seen this idea discussed by anyone except me, but it is
extremely powerful and potentially a complete showstopper for the kind
of design inherent in the goal stack approach.  I have certainly never
seen anything like a reasonable rebuttal of it:  even if it turns

Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-11-30 Thread James Ratcliff
Also, could both or either of you describe a little bit more the idea of your 
"goal-stacks" and how they should/would function?

James

David Hart <[EMAIL PROTECTED]> wrote: On 11/30/06, Ben Goertzel <[EMAIL 
PROTECTED]> wrote: Richard,

This is certainly true, and is why in Novamente we use a goal stack
only as one aspect of cognitive control...



Ben,

Could you elaborate for the list some of the nuances between [explicit] 
cognitive control and [implicit] cognitive bias, either theoretically or within 
Novamente? 

David


 
-
 This list is sponsored by AGIRI: http://www.agiri.org/email To unsubscribe or 
change your options, please go to: http://v2.listbox.com/member/?list_id=303 


___
James Ratcliff - http://falazar.com
New Torrent Site, Has TV and Movie Downloads! 
http://www.falazar.com/projects/Torrents/tvtorrents_show.php
 

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-11-30 Thread James Ratcliff
You could start a smaller AI with a simple hardcoded "desire" or reward 
mechanism to "learn" new things, or to increase the size of its knowledge.

 That would be a simple way to programmatically insert it.  That along with a 
seed AI, must be put in there in the beginning. 

Remember we are not just throwing it out there with no goals or anything in the 
beginning, or it would learn nothing, and DO nothing at all.

Later this piece may need to be directly modifiable by the code to decrease or 
increase its desire to "explore" or learn new things, depending on its other 
goals.
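
As a purely hypothetical sketch of such a hardcoded reward mechanism (Python, 
all names invented): the "desire to learn" could be a reward proportional to 
newly acquired knowledge, with an exploration weight that other parts of the 
system may later raise or lower:

# Minimal sketch of a hardcoded "desire to learn" reward, as described above.
# All names are hypothetical; the point is only that the reward is proportional
# to genuinely new knowledge, and that its weight is a tunable parameter.

class CuriosityReward:
    def __init__(self, exploration_weight=1.0):
        self.exploration_weight = exploration_weight  # adjustable later
        self.known_facts = set()

    def reward(self, observed_facts):
        new_facts = set(observed_facts) - self.known_facts
        self.known_facts |= new_facts
        # Reward grows with the amount of genuinely new knowledge.
        return self.exploration_weight * len(new_facts)

r = CuriosityReward(exploration_weight=0.5)
print(r.reward(["fire is hot", "ice is cold"]))   # 1.0: two new facts
print(r.reward(["fire is hot"]))                  # 0.0: nothing new
r.exploration_weight = 2.0                        # another goal dials curiosity up
print(r.reward(["snow is white"]))                # 2.0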

James


Philip Goetz <[EMAIL PROTECTED]> wrote: On 11/19/06, Richard Loosemore  wrote:

> The goal-stack AI might very well turn out simply not to be a workable
> design at all!  I really do mean that:  it won't become intelligent
> enough to be a threat.   Specifically, we may find that the kind of
> system that drives itself using only a goal stack never makes it up to
> full human level intelligence because it simply cannot do the kind of
> general, broad-spectrum learning that a Motivational System AI would do.
>
> Why?  Many reasons, but one is that the system could never learn
> autonomously from a low level of knowledge *because* it is using goals
> that are articulated using the system's own knowledge base.  Put simply,
> when the system is in its child phase it cannot have the goal "acquire
> new knowledge" because it cannot understand the meaning of the words
> "acquire" or "new" or "knowledge"!  It isn't due to learn those words
> until it becomes more mature (develops more mature concepts), so how can
> it put "acquire new knowledge" on its goal stack and then unpack that
> goal into subgoals, etc?

This is an excellent observation that I hadn't heard before - thanks, Richard!

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



___
James Ratcliff - http://falazar.com
New Torrent Site, Has TV and Movie Downloads! 
http://www.falazar.com/projects/Torrents/tvtorrents_show.php
 

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-11-29 Thread David Hart

On 11/30/06, Ben Goertzel <[EMAIL PROTECTED]> wrote:


Richard,

This is certainly true, and is why in Novamente we use a goal stack
only as one aspect of cognitive control...




Ben,

Could you elaborate for the list some of the nuances between [explicit]
cognitive control and [implicit] cognitive bias, either theoretically or
within Novamente?

David

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-11-29 Thread Ben Goertzel

Richard,

This is certainly true, and is why in Novamente we use a goal stack
only as one aspect of cognitive control...

ben

On 11/29/06, Philip Goetz <[EMAIL PROTECTED]> wrote:

On 11/19/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:

> The goal-stack AI might very well turn out simply not to be a workable
> design at all!  I really do mean that:  it won't become intelligent
> enough to be a threat.   Specifically, we may find that the kind of
> system that drives itself using only a goal stack never makes it up to
> full human level intelligence because it simply cannot do the kind of
> general, broad-spectrum learning that a Motivational System AI would do.
>
> Why?  Many reasons, but one is that the system could never learn
> autonomously from a low level of knowledge *because* it is using goals
> that are articulated using the system's own knowledge base.  Put simply,
> when the system is in its child phase it cannot have the goal "acquire
> new knowledge" because it cannot understand the meaning of the words
> "acquire" or "new" or "knowledge"!  It isn't due to learn those words
> until it becomes more mature (develops more mature concepts), so how can
> it put "acquire new knowledge" on its goal stack and then unpack that
> goal into subgoals, etc?

This is an excellent observation that I hadn't heard before - thanks, Richard!

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-11-29 Thread Philip Goetz

On 11/19/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:


The goal-stack AI might very well turn out simply not to be a workable
design at all!  I really do mean that:  it won't become intelligent
enough to be a threat.   Specifically, we may find that the kind of
system that drives itself using only a goal stack never makes it up to
full human level intelligence because it simply cannot do the kind of
general, broad-spectrum learning that a Motivational System AI would do.

Why?  Many reasons, but one is that the system could never learn
autonomously from a low level of knowledge *because* it is using goals
that are articulated using the system's own knowledge base.  Put simply,
when the system is in its child phase it cannot have the goal "acquire
new knowledge" because it cannot understand the meaning of the words
"acquire" or "new" or "knowledge"!  It isn't due to learn those words
until it becomes more mature (develops more mature concepts), so how can
it put "acquire new knowledge" on its goal stack and then unpack that
goal into subgoals, etc?


This is an excellent observation that I hadn't heard before - thanks, Richard!

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-11-19 Thread Richard Loosemore

Hank Conn wrote:

 > Yes, you are exactly right. The question is which of my
assumption are
 > unrealistic?

Well, you could start with the idea that the AI has "... a strong goal
that directs its behavior to aggressively take advantage of these
means...".   It depends what you mean by "goal" (an item on the task
stack or a motivational drive?  They are different things) and this
begs
a question about who the idiot was that designed it so that it pursues
this kind of aggressive behavior rather than some other!

A goal is a problem you want to solve in some environment. The "idiot" 
who designed it may program its goal to be, say, making paperclips. 
Then, after some thought and RSI, the AI decides converting the entire 
planet into a computronium in order to figure out how to maximize the 
number of paper clips in the Universe will satisfy this goal quite 
optimally. Anybody could program it with any goal in mind, and RSI 
happens to be a very useful process for accomplishing many complex goals.


There is *so* much packed into your statement that it is difficult to go
into it in detail.

Just to start with, you would need to cross compare the above statement
with the account I gave recently of how a system should be built with a
motivational system based on large numbers of diffuse constraints.  Your
description is one particular, rather dangerous, design for an AI - it
is not an inevitable design.

 
I'm not asserting any specific AI design. And I don't see how 
a motivational system based on "large numbers of diffuse constrains" 
inherently prohibits RSI, or really has any relevance to this. "A 
motivation system based on large numbers of diffuse constraints" does 
not, by itself, solve the problem- if the particular constraints do not 
form a congruent mapping to the concerns of humanity, regardless of 
their number or level of diffuseness, then we are likely facing an 
Unfriendly outcome of the Singularity, at some point in the future.


The point I am heading towards, in all of this, is that we need to 
unpack some of these ideas in great detail in order to come to sensible 
conclusions.


I think the best way would be in a full length paper, although I did 
talk about some of that detail in my recent lengthy post on motivational 
systems.


Let me try to bring out just one point, so you can see where I am going 
when I suggest it needs much more detail.  In the above, you really are 
asserting one specific AI design, because you talk about the goal stack 
as if this could be so simple that the programmer would be able to 
insert the "make paperclips" goal and the machine would go right ahead 
and do that.  That type of AI design is very, very different from the 
Motivational System AI that I discussed before (the one with the diffuse 
set of constraints driving it).


Here is one of many differences between the two approaches.

The goal-stack AI might very well turn out simply not to be a workable 
design at all!  I really do mean that:  it won't become intelligent 
enough to be a threat.   Specifically, we may find that the kind of 
system that drives itself using only a goal stack never makes it up to 
full human level intelligence because it simply cannot do the kind of 
general, broad-spectrum learning that a Motivational System AI would do.


Why?  Many reasons, but one is that the system could never learn 
autonomously from a low level of knowledge *because* it is using goals 
that are articulated using the system's own knowledge base.  Put simply, 
when the system is in its child phase it cannot have the goal "acquire 
new knowledge" because it cannot understand the meaning of the words 
"acquire" or "new" or "knowledge"!  It isn't due to learn those words 
until it becomes more mature (develops more mature concepts), so how can 
it put "acquire new knowledge" on its goal stack and then unpack that 
goal into subgoals, etc?


Try the same question with any goal that the system might have when it 
is in its infancy, and you'll see what I mean.  The whole concept of a 
system driven only by a goal stack with statements that resolve on its 
knowledge base is that it needs to be already very intelligent before it 
can use them.
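
A toy illustration of this bootstrap problem (Python, entirely invented names): 
if goals must be stated in the system's own conceptual vocabulary, the "child" 
system cannot even represent the goal it is supposed to pursue:

# Toy illustration of the bootstrap problem described above.  Goals are stated
# in the system's own conceptual vocabulary; pushing a goal fails if the system
# does not yet possess the concepts the goal is written in.  Names are made up.

class ConceptualGoalStack:
    def __init__(self, known_concepts):
        self.known_concepts = set(known_concepts)
        self.stack = []

    def push_goal(self, goal_concepts):
        missing = set(goal_concepts) - self.known_concepts
        if missing:
            raise ValueError(f"cannot represent goal, unknown concepts: {missing}")
        self.stack.append(tuple(goal_concepts))

child = ConceptualGoalStack(known_concepts={"mama", "milk", "warm"})
try:
    child.push_goal(["acquire", "new", "knowledge"])
except ValueError as e:
    print(e)   # the 'child' cannot even state the goal it is supposed to pursue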


I have never seen this idea discussed by anyone except me, but it is 
extremely powerful and potentially a complete showstopper for the kind 
of design inherent in the goal stack approach.  I have certainly never 
seen anything like a reasonable rebuttal of it:  even if it turns out 
not to be as serious as I claim it is, it still needs to be addressed in 
a serious way before anyone can make assertions about what goal stack 
systems can do.


What is the significance of just this one idea?  That all the goal stack 
approaches might be facing a serious a problem if they want to get 
autonomous, powerful learning mechanisms that build themselves from a 
low level.  So what are AI researchers doing about t

Re: [agi] RSI - What is it and how fast?

2006-11-17 Thread Hank Conn

On 11/17/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:


Hank Conn wrote:
> On 11/17/06, *Richard Loosemore* <[EMAIL PROTECTED]
> > wrote:
>
> Hank Conn wrote:
>  > Here are some of my attempts at explaining RSI...
>  >
>  > (1)
>  > As a given instance of intelligence, as defined as an algorithm
> of an
>  > agent capable of achieving complex goals in complex environments,
>  > approaches the theoretical limits of efficiency for this class of
>  > algorithms, intelligence approaches infinity. Since increasing
>  > computational resources available for an algorithm is a complex
> goal in
>  > a complex environment, the more intelligent an instance of
> intelligence
>  > becomes, the more capable it is in increasing the computational
>  > resources for the algorithm, as well as more capable in
> optimizing the
>  > algorithm for maximum efficiency, thus increasing its
> intelligence in a
>  > positive feedback loop.
>  >
>  > (2)
>  > Suppose an instance of a mind has direct access to some means of
> both
>  > improving and expanding both the hardware and software capability
> of its
>  > particular implementation. Suppose also that the goal system of
this
>  > mind elicits a strong goal that directs its behavior to
aggressively
>  > take advantage of these means. Given each increase in capability
> of the
>  > mind's implementation, it could (1) increase the speed at which
its
>  > hardware is upgraded and expanded, (2) More quickly, cleverly,
and
>  > elegantly optimize its existing software base to maximize
capability,
>  > (3) Develop better cognitive tools and functions more quickly and
in
>  > more quantity, and (4) Optimize its implementation on
> successively lower
>  > levels by researching and developing better, smaller, more
advanced
>  > hardware. This would create a positive feedback loop- the more
> capable
>  > its implementation, the more capable it is in improving its
> implementation.
>  >
>  > How fast could RSI plausibly happen? Is RSI inevitable / how soon
> will
>  > it be? How do we truly maximize the benefit to humanity?
>  >
>  > It is my opinion that this could happen extremely quickly once a
>  > completely functional AGI is achieved. I think its plausible it
could
>  > happen against the will of the designers (and go on to pose an
>  > existential risk), and quite likely that it would move along
> quite well
>  > with the designers intention, however, this opens up the door to
>  > existential disasters in the form of so-called Failures of
> Friendliness.
>  > I think its fairly implausible the designers would suppress this
>  > process, except those that are concerned about completely working
out
>  > issues of Friendliness in the AGI design.
>
> Hank,
>
> First, I will say what I always say when faced by arguments that
> involve
> the goals and motivations of an AI:  your argument crucially depends
on
> assumptions about what its motivations would be.  Because you have
made
> extremely simple assumptions about the motivation system, AND
because
> you have chosen assumptions that involve basic unfriendliness, your
> scenario is guaranteed to come out looking like an existential
threat.
>
>






Yes, you are exactly right. The question is which of my assumptions are
> unrealistic?

Well, you could start with the idea that the AI has "... a strong goal
that directs its behavior to aggressively take advantage of these
means...".   It depends what you mean by "goal" (an item on the task
stack or a motivational drive?  They are different things) and this begs
a question about who the idiot was that designed it so that it pursues
this kind of aggressive behavior rather than some other!



A goal is a problem you want to solve in some environment. The "idiot" who
designed it may program its goal to be, say, making paperclips. Then, after
some thought and RSI, the AI decides converting the entire planet into a
computronium in order to figure out how to maximize the number of paper
clips in the Universe will satisfy this goal quite optimally. Anybody could
program it with any goal in mind, and RSI happens to be a very useful
process for accomplishing many complex goals.


There is *so* much packed into your statement that it is difficult to go
into it in detail.

Just to start with, you would need to cross compare the above statement
with the account I gave recently of how a system should be built with a
motivational system based on large numbers of diffuse constraints.  Your
description is one particular, rather dangerous, design for an AI - it
is not an inevitable design.



I'm not asserting any specific AI design. And I don't see how a motivational
system based on "large numbers of diffuse constrains" i

Re: [agi] RSI - What is it and how fast?

2006-11-17 Thread Richard Loosemore

Hank Conn wrote:
On 11/17/06, *Richard Loosemore* <[EMAIL PROTECTED] 
> wrote:


Hank Conn wrote:
 > Here are some of my attempts at explaining RSI...
 >
 > (1)
 > As a given instance of intelligence, as defined as an algorithm
of an
 > agent capable of achieving complex goals in complex environments,
 > approaches the theoretical limits of efficiency for this class of
 > algorithms, intelligence approaches infinity. Since increasing
 > computational resources available for an algorithm is a complex
goal in
 > a complex environment, the more intelligent an instance of
intelligence
 > becomes, the more capable it is in increasing the computational
 > resources for the algorithm, as well as more capable in
optimizing the
 > algorithm for maximum efficiency, thus increasing its
intelligence in a
 > positive feedback loop.
 >
 > (2)
 > Suppose an instance of a mind has direct access to some means of
both
 > improving and expanding both the hardware and software capability
of its
 > particular implementation. Suppose also that the goal system of this
 > mind elicits a strong goal that directs its behavior to aggressively
 > take advantage of these means. Given each increase in capability
of the
 > mind's implementation, it could (1) increase the speed at which its
 > hardware is upgraded and expanded, (2) More quickly, cleverly, and
 > elegantly optimize its existing software base to maximize capability,
 > (3) Develop better cognitive tools and functions more quickly and in
 > more quantity, and (4) Optimize its implementation on
successively lower
 > levels by researching and developing better, smaller, more advanced
 > hardware. This would create a positive feedback loop- the more
capable
 > its implementation, the more capable it is in improving its
implementation.
 >
 > How fast could RSI plausibly happen? Is RSI inevitable / how soon
will
 > it be? How do we truly maximize the benefit to humanity?
 >
 > It is my opinion that this could happen extremely quickly once a
 > completely functional AGI is achieved. I think its plausible it could
 > happen against the will of the designers (and go on to pose an
 > existential risk), and quite likely that it would move along
quite well
 > with the designers intention, however, this opens up the door to
 > existential disasters in the form of so-called Failures of
Friendliness.
 > I think its fairly implausible the designers would suppress this
 > process, except those that are concerned about completely working out
 > issues of Friendliness in the AGI design.

Hank,

First, I will say what I always say when faced by arguments that
involve
the goals and motivations of an AI:  your argument crucially depends on
assumptions about what its motivations would be.  Because you have made
extremely simple assumptions about the motivation system, AND because
you have chosen assumptions that involve basic unfriendliness, your
scenario is guaranteed to come out looking like an existential threat.

 
Yes, you are exactly right. The question is which of my assumptions are 
unrealistic?


Well, you could start with the idea that the AI has "... a strong goal 
that directs its behavior to aggressively take advantage of these 
means...".   It depends what you mean by "goal" (an item on the task 
stack or a motivational drive?  They are different things) and this begs 
a question about who the idiot was that designed it so that it pursues 
this kind of aggressive behavior rather than some other!


There is *so* much packed into your statement that it is difficult to go 
into it in detail.


Just to start with, you would need to cross compare the above statement 
with the account I gave recently of how a system should be built with a 
motivational system based on large numbers of diffuse constraints.  Your 
description is one particular, rather dangerous, design for an AI - it 
is not an inevitable design.


Also, if you meant to exclude the type of system I described (if you 
meant a system with a goal stack and no motivational system), you might 
well be describing a system design that, in my opinion, would not be 
very dangerous because it would never actually make it to human level 
intelligence.  In that case none of us would have much to be worried about.


Richard Loosemore






Second, your arguments both have the feel of a Zeno's Paradox argument:
they look as though they imply an ever-increasing rapaciousness on the
part of the AI, whereas in fact there are so many assumptions built into
your statement that in practice your arguments could result in *any*
growth scenario, including ones where it plateaus.   It is a little
like
you arguing that every infinite sum involv

Re: [agi] RSI - What is it and how fast?

2006-11-17 Thread Hank Conn

On 11/17/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:


Hank Conn wrote:
> Here are some of my attempts at explaining RSI...
>
> (1)
> As a given instance of intelligence, as defined as an algorithm of an
> agent capable of achieving complex goals in complex environments,
> approaches the theoretical limits of efficiency for this class of
> algorithms, intelligence approaches infinity. Since increasing
> computational resources available for an algorithm is a complex goal in
> a complex environment, the more intelligent an instance of intelligence
> becomes, the more capable it is in increasing the computational
> resources for the algorithm, as well as more capable in optimizing the
> algorithm for maximum efficiency, thus increasing its intelligence in a
> positive feedback loop.
>
> (2)
> Suppose an instance of a mind has direct access to some means of both
> improving and expanding both the hardware and software capability of its
> particular implementation. Suppose also that the goal system of this
> mind elicits a strong goal that directs its behavior to aggressively
> take advantage of these means. Given each increase in capability of the
> mind's implementation, it could (1) increase the speed at which its
> hardware is upgraded and expanded, (2) More quickly, cleverly, and
> elegantly optimize its existing software base to maximize capability,
> (3) Develop better cognitive tools and functions more quickly and in
> more quantity, and (4) Optimize its implementation on successively lower
> levels by researching and developing better, smaller, more advanced
> hardware. This would create a positive feedback loop- the more capable
> its implementation, the more capable it is in improving its
implementation.
>
> How fast could RSI plausibly happen? Is RSI inevitable / how soon will
> it be? How do we truly maximize the benefit to humanity?
>
> It is my opinion that this could happen extremely quickly once a
> completely functional AGI is achieved. I think its plausible it could
> happen against the will of the designers (and go on to pose an
> existential risk), and quite likely that it would move along quite well
> with the designers intention, however, this opens up the door to
> existential disasters in the form of so-called Failures of Friendliness.
> I think its fairly implausible the designers would suppress this
> process, except those that are concerned about completely working out
> issues of Friendliness in the AGI design.

Hank,

First, I will say what I always say when faced by arguments that involve
the goals and motivations of an AI:  your argument crucially depends on
assumptions about what its motivations would be.  Because you have made
extremely simple assumptions about the motivation system, AND because
you have chosen assumptions that involve basic unfriendliness, your
scenario is guaranteed to come out looking like an existential threat.



Yes, you are exactly right. The question is which of my assumptions are
unrealistic?


Second, your arguments both have the feel of a Zeno's Paradox argument:
they look as though they imply an ever-increasing rapaciousness on the
part of the AI, whereas in fact there are so many assumptions built into
your statement that in practice your arguments could result in *any*
growth scenario, including ones where it plateaus.   It is a little like
you arguing that every infinite sum involves adding stuff together, so
every infinite sum must go off to infinity... a spurious argument, of
course, because they can go in any direction.



Of course any scenario is possible post-Singularity, including ones we can't
even imagine. Building an AI in such a way that you are capable of proving
causal or probabilistic bounds on its behavior through recursive
self-improvement is the way to be sure of a Friendly outcome.


Richard Loosemore




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] RSI - What is it and how fast?

2006-11-17 Thread Richard Loosemore

Hank Conn wrote:

Here are some of my attempts at explaining RSI...
 
(1)
As a given instance of intelligence, as defined as an algorithm of an 
agent capable of achieving complex goals in complex environments, 
approaches the theoretical limits of efficiency for this class of 
algorithms, intelligence approaches infinity. Since increasing 
computational resources available for an algorithm is a complex goal in 
a complex environment, the more intelligent an instance of intelligence 
becomes, the more capable it is in increasing the computational 
resources for the algorithm, as well as more capable in optimizing the 
algorithm for maximum efficiency, thus increasing its intelligence in a 
positive feedback loop.
 
(2)
Suppose an instance of a mind has direct access to some means of both 
improving and expanding both the hardware and software capability of its 
particular implementation. Suppose also that the goal system of this 
mind elicits a strong goal that directs its behavior to aggressively 
take advantage of these means. Given each increase in capability of the 
mind's implementation, it could (1) increase the speed at which its 
hardware is upgraded and expanded, (2) More quickly, cleverly, and 
elegantly optimize its existing software base to maximize capability, 
(3) Develop better cognitive tools and functions more quickly and in 
more quantity, and (4) Optimize its implementation on successively lower 
levels by researching and developing better, smaller, more advanced 
hardware. This would create a positive feedback loop- the more capable 
its implementation, the more capable it is in improving its implementation.
 
How fast could RSI plausibly happen? Is RSI inevitable / how soon will 
it be? How do we truly maximize the benefit to humanity?
 
It is my opinion that this could happen extremely quickly once a 
completely functional AGI is achieved. I think it's plausible it could 
happen against the will of the designers (and go on to pose an 
existential risk), and quite likely that it would move along quite well 
with the designers' intention; however, this opens up the door to 
existential disasters in the form of so-called Failures of Friendliness. 
I think it's fairly implausible the designers would suppress this 
process, except those that are concerned about completely working out 
issues of Friendliness in the AGI design.


Hank,

First, I will say what I always say when faced by arguments that involve 
the goals and motivations of an AI:  your argument crucially depends on 
assumptions about what its motivations would be.  Because you have made 
extremely simple assumptions about the motivation system, AND because 
you have chosen assumptions that involve basic unfriendliness, your 
scenario is guaranteed to come out looking like an existential threat.


Second, your arguments both have the feel of a Zeno's Paradox argument: 
 they look as though they imply an ever-increasing rapaciousness on the 
part of the AI, whereas in fact there are so many assumptions built into 
your statement that in practice your arguments could result in *any* 
growth scenario, including ones where it plateaus.   It is a little like 
you arguing that every infinite sum involves adding stuff together, so 
every infinite sum must go off to infinity... a spurious argument, of 
course, because they can go in any direction.


Richard Loosemore



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] RSI - What is it and how fast?

2006-11-16 Thread Hank Conn

On 11/16/06, Russell Wallace <[EMAIL PROTECTED]> wrote:


On 11/16/06, Hank Conn <[EMAIL PROTECTED]> wrote:

>  How fast could RSI plausibly happen? Is RSI inevitable / how soon will
> it be? How do we truly maximize the benefit to humanity?
>

The concept is unfortunately based on a category error: intelligence (in
the operational sense of ability to get things done) is not a mathematical
property of a program, but an empirical property of the program plus the
real world.



I'm simply defining it as the efficiency in accomplishing complex goals in
complex environments.


 There is no algorithm that will compute whether a putative improvement is
actually an improvement.




I don't know whether that is true or not, but it is obvious, in many cases,
that some putative improvement will actually improve things, and it is
certainly possible to approximate such judgments to varying levels of correctness.
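
One hedged way to read "approximate to varying levels of correctness" is 
empirical: score a candidate modification against the current version on a 
benchmark and accept it only if it wins by a margin.  A minimal sketch 
(Python; the benchmark, margin, and toy "versions" are all invented), which 
of course only approximates "better" relative to the chosen benchmark:

# Sketch of an empirical (approximate) test of whether a putative improvement
# is actually an improvement: score both versions on a benchmark and keep the
# candidate only if it clearly wins.  This only measures "better" relative to
# the chosen benchmark, which is exactly the limitation under discussion.

def benchmark_score(solver, tasks):
    return sum(1 for task in tasks if solver(task) == task["answer"]) / len(tasks)

def accept_improvement(current, candidate, tasks, margin=0.02):
    return benchmark_score(candidate, tasks) >= benchmark_score(current, tasks) + margin

# Toy "solvers" and tasks standing in for two versions of a system.
tasks = [{"question": (a, b), "answer": a + b} for a in range(5) for b in range(5)]
current   = lambda t: t["question"][0]                      # ignores the second operand
candidate = lambda t: t["question"][0] + t["question"][1]   # actually adds

print(accept_improvement(current, candidate, tasks))  # True on this benchmark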


So the answers to your questions are: (1, 2) given that it's the cognitive
equivalent of a perpetual motion machine,




How?


 don't hold your breath, (3) by moving on to ideas that, while lacking the
free lunch appeal of RSI, have a chance of being able to work in real life.


--

This list is sponsored by AGIRI: http://www.agiri.org/email To unsubscribe
or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] RSI - What is it and how fast?

2006-11-16 Thread Russell Wallace

On 11/16/06, Hank Conn <[EMAIL PROTECTED]> wrote:


How fast could RSI plausibly happen? Is RSI inevitable / how soon will it
be? How do we truly maximize the benefit to humanity?



The concept is unfortunately based on a category error: intelligence (in the
operational sense of ability to get things done) is not a mathematical
property of a program, but an empirical property of the program plus the
real world. There is no algorithm that will compute whether a putative
improvement is actually an improvement.

So the answers to your questions are: (1, 2) given that it's the cognitive
equivalent of a perpetual motion machine, don't hold your breath, (3) by
moving on to ideas that, while lacking the free lunch appeal of RSI, have a
chance of being able to work in real life.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] RSI - What is it and how fast?

2006-11-16 Thread Matt Mahoney
I think this is a topic for the singularity list, but I agree it could happen 
very quickly.  Right now there is more than enough computing power on the 
Internet to support superhuman AGI.  One possibility is that it could take the 
form of a worm.

http://en.wikipedia.org/wiki/SQL_slammer_(computer_worm)

An AGI of this type would be far more dangerous because it could analyze code, 
discover large numbers of vulnerabilities and exploit them all at once.  As the 
Internet gets bigger, faster, and more complex, the risk increases.
 
-- Matt Mahoney, [EMAIL PROTECTED]

- Original Message 
From: Hank Conn <[EMAIL PROTECTED]>
To: agi 
Sent: Thursday, November 16, 2006 3:33:08 PM
Subject: [agi] RSI - What is it and how fast?

Here are some of my attempts at explaining RSI...

 

(1)

As a given instance of intelligence, as defined as an algorithm of an agent 
capable of achieving complex goals in complex environments, approaches the 
theoretical limits of efficiency for this class of algorithms, intelligence 
approaches infinity. Since increasing computational resources available for an 
algorithm is a complex goal in a complex environment, the more intelligent an 
instance of intelligence becomes, the more capable it is in increasing the 
computational resources for the algorithm, as well as more capable in 
optimizing the algorithm for maximum efficiency, thus increasing its 
intelligence in a positive feedback loop.


 

(2)

Suppose an instance of a mind has direct access to some means of improving
and expanding both the hardware and software capability of its particular
implementation. Suppose also that the goal system of this mind elicits a
strong goal that directs its behavior to aggressively take advantage of these
means. Given each increase in the capability of the mind's implementation, it
could (1) increase the speed at which its hardware is upgraded and expanded,
(2) more quickly, cleverly, and elegantly optimize its existing software base
to maximize capability, (3) develop better cognitive tools and functions more
quickly and in greater quantity, and (4) optimize its implementation on
successively lower levels by researching and developing better, smaller, more
advanced hardware. This would create a positive feedback loop: the more
capable its implementation, the more capable it is of improving that
implementation.


 


How fast could RSI plausibly happen? Is RSI inevitable / how soon will it be? 
How do we truly maximize the benefit to humanity?


 

It is my opinion that this could happen extremely quickly once a completely
functional AGI is achieved. I think it's plausible that it could happen
against the will of the designers (and go on to pose an existential risk),
and quite likely that it would proceed in line with the designers'
intentions; however, this opens the door to existential disasters in the form
of so-called Failures of Friendliness. I think it's fairly implausible that
the designers would suppress this process, except for those concerned with
completely working out issues of Friendliness in the AGI design.



This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


[agi] RSI - What is it and how fast?

2006-11-16 Thread Hank Conn

Here are some of my attempts at explaining RSI...

(1)
As a given instance of intelligence (defined as an algorithm of an agent
capable of achieving complex goals in complex environments) approaches the
theoretical limits of efficiency for this class of algorithms, its
intelligence approaches infinity. Since increasing the computational
resources available to an algorithm is itself a complex goal in a complex
environment, the more intelligent an instance of intelligence becomes, the
more capable it is of increasing the computational resources for the
algorithm, and of optimizing the algorithm for maximum efficiency, thus
increasing its intelligence in a positive feedback loop.
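
As a toy numerical sketch of the feedback loop in (1): the growth rule, the
constants, and the coupling values below are all invented, so the sketch
shows only how sharply the trajectory depends on how strongly the rate of
improvement is assumed to be coupled to current intelligence, not what that
coupling actually is.

def simulate(coupling, steps=99, dt=0.1, k=0.2):
    # Intelligence grows at a rate k * I**coupling (invented rule, invented
    # numbers):
    #   coupling < 1  : relative growth keeps slowing
    #   coupling == 1 : ordinary exponential growth
    #   coupling > 1  : the rate outruns exponential growth (the continuous
    #                   version of this rule blows up in finite time)
    intelligence = 1.0
    for _ in range(steps):
        intelligence += dt * k * intelligence ** coupling
    return intelligence

for coupling in (0.5, 1.0, 1.5):
    print(f"coupling {coupling}: after 99 steps = {simulate(coupling):,.1f}")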

(2)
Suppose an instance of a mind has direct access to some means of improving
and expanding both the hardware and software capability of its particular
implementation. Suppose also that the goal system of this mind elicits a
strong goal that directs its behavior to aggressively take advantage of these
means. Given each increase in the capability of the mind's implementation, it
could (1) increase the speed at which its hardware is upgraded and expanded,
(2) more quickly, cleverly, and elegantly optimize its existing software base
to maximize capability, (3) develop better cognitive tools and functions more
quickly and in greater quantity, and (4) optimize its implementation on
successively lower levels by researching and developing better, smaller, more
advanced hardware. This would create a positive feedback loop: the more
capable its implementation, the more capable it is of improving that
implementation.
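
A skeleton of the loop in (2), purely as a sketch: the four candidate
channels mirror points (1)-(4) above, but the capability measure, the
numbers, and every name below are invented for illustration, and the
evaluation step is exactly the place where, as discussed elsewhere in the
thread, only approximate empirical tests are available.

import random

# Toy state of the implementation: larger numbers mean more capable.
state = {"hardware_speed": 1.0, "software_efficiency": 1.0,
         "cognitive_tools": 1.0, "substrate_quality": 1.0}

def capability(s):
    # Overall capability as a simple product of the four factors.
    product = 1.0
    for value in s.values():
        product *= value
    return product

def propose_modifications(s):
    # The four channels from the post, as candidate changes to try.  In a
    # real system each would be an engineering project with an uncertain
    # payoff; here each is a random multiplicative tweak whose typical size
    # scales with current capability (the feedback step).
    scale = 0.01 * capability(s) ** 0.5
    for channel in s:
        candidate = dict(s)
        candidate[channel] *= 1.0 + random.gauss(scale, scale)
        yield candidate

def measured_better(candidate, incumbent):
    # Stand-in for empirical evaluation; such a test can only ever be
    # approximate.
    return capability(candidate) > capability(incumbent)

for _ in range(50):
    for candidate in propose_modifications(state):
        if measured_better(candidate, state):
            state = candidate          # adopt only measured gains
print(f"capability after 50 rounds: {capability(state):.2f}")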

How fast could RSI plausibly happen? Is RSI inevitable / how soon will it
be? How do we truly maximize the benefit to humanity?

It is my opinion that this could happen extremely quickly once a completely
functional AGI is achieved. I think it's plausible that it could happen
against the will of the designers (and go on to pose an existential risk),
and quite likely that it would proceed in line with the designers'
intentions; however, this opens the door to existential disasters in the form
of so-called Failures of Friendliness. I think it's fairly implausible that
the designers would suppress this process, except for those concerned with
completely working out issues of Friendliness in the AGI design.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303