RE: [agi] IBM, Los Alamos scientists claim fastest computer

2008-06-13 Thread Ed Porter
Matt,

Thank you for your reply.  For me it is very thought provoking.

-Original Message-
From: Matt Mahoney [mailto:[EMAIL PROTECTED] 
Sent: Thursday, June 12, 2008 7:23 PM
To: agi@v2.listbox.com
Subject: RE: [agi] IBM, Los Alamos scientists claim fastest computer

--- On Thu, 6/12/08, Ed Porter [EMAIL PROTECTED] wrote:

 I think processor to memory, and inter processor
 communications are currently far short

Your concern is over the added cost of implementing a sparsely connected
network, which slows memory access and requires more memory for
representation (e.g. pointers in addition to a weight matrix). We can
alleviate much of the problem by using connection locality.
[Ed Porter] -- This would certainly be true if it worked.


The brain has about 10^11 neurons with 10^4 synapses per neuron. If we
divide this work among 10^6 processors, each representing 1 mm^3 of brain
tissue, then each processor must implement 10^5 neurons and 10^9 synapses.
By my earlier argument, there can be at most 10^6 external connections,
assuming a 1-2 micron nerve fiber diameter, 

[Ed Porter] -- Why couldn't each of the 10^6 fibers have multiple
connections along its length within the cm^3 (although it could be
represented as one row in the matrix, with individual connections
represented as elements in such a row)?


so half of the connections must be local. This is true at any scale because
when you double the size of a cube, you increase the number of neurons by 8
but increase the number of external connections by 4. Thus, for any size
cube, half of the external connections are to neighboring cubes and half are
to more distant cubes.

[Ed Porter] -- I am getting lost here.  Why are half the connections local?
You implied there are 10^6 external connections in the cm^3, and 10^9
synapses, which are the connections.  Thus the 10^6 external connections you
mention are only 1/1000 of the 10^9 total connections you mention in the
cm^3, not one half as you say.  I understand that there are likely to be as
many connections leaving the cube as going into it, which is related, but not
the same thing as saying half the connections in the cm^3 are external. 

[Ed Porter] -- It is true that at each doubling of scale the surface-to-volume
ratio changes by the same factor of 1/2, but that means the actual ratio of
surface to volume decreases by half at each such doubling, so the ratio
actually DOES CHANGE with scaling, rather than remaining constant as indicated
above.
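For concreteness, here is one reading of the scaling claim as a small Python
sketch (the fiber and synapse counts below are the assumed figures from
earlier in this thread, not measurements).  On this reading, the "half" refers
to the roughly 10^6 fibers crossing a cube's surface, not to the 10^9 synapses
inside it: of those surface-crossing fibers, about half terminate within the
surrounding block of twice the side (i.e., in nearby cubes) and half go
farther, and that 50/50 split repeats at every doubling, even though the
external-to-internal ratio itself falls by half with each doubling.

# A sketch of the scaling bookkeeping (assumed figures from the thread).
E1 = 1e6          # external fiber endpoints of a 1 mm cube (upper bound above)
S1 = 1e9          # synapses inside that cube

for k in range(4):                      # cubes of side 1, 2, 4, 8 mm
    side = 2 ** k
    ext  = E1 * side ** 2               # external endpoints scale with surface
    syn  = S1 * side ** 3               # synapses scale with volume
    # 8 sub-cubes have 8*E external endpoints, of which only 4*E leave the
    # enclosing 2x-side block, so half go to nearby cubes and half go farther.
    print(f"side={side}mm  external/synapses={ext/syn:.1e}  "
          f"to-nearby={ext/2:.1e}  to-distant={ext/2:.1e}")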

A 1 mm^3 cube can be implemented as a fully connected 10^5 by 10^5 matrix of
10^10 connections. This could be implemented as a 1.25 GB array of bits with
5% of bits set to 1 representing a connection. 

[Ed Porter] -- A synapse would have multiple weights, such as short-term and
long-term weights, and each would be more than one bit.  Plus some synapses
are excitatory and others inhibitory, so the weights would have differing
signs.  So multiple bits, probably at least two bytes, would be necessary per
element in the matrix.
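To make the storage tradeoff concrete, a sketch of the arithmetic (figures
reused from the thread; the two-bytes-per-element case reflects the point
just above):

# Storage arithmetic for one processor's 1 mm^3 share (a sketch, not a design).
NEURONS_TOTAL = 1e11
SYNAPSES_PER_NEURON = 1e4
PROCESSORS = 1e6

neurons_per_proc  = NEURONS_TOTAL / PROCESSORS                 # 1e5 neurons
synapses_per_proc = neurons_per_proc * SYNAPSES_PER_NEURON     # 1e9 synapses

elements = neurons_per_proc ** 2              # 1e10 possible local connections
one_bit_gb  = elements / 8 / 1e9              # 1.25 GB as a packed bit matrix
two_byte_gb = elements * 2 / 1e9              # 20 GB at two bytes per element

print(f"{one_bit_gb:.2f} GB (1 bit/element) vs {two_byte_gb:.0f} GB (2 bytes/element)")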

[Ed Porter] -- Also you haven't explained how you efficiently do the
activation between cubes (I assume it would be by having a row for each
neuron that projects axons into the cube, and a column for each neuron that
projects a dendrite into it).  This could still be represented by the
matrix, but it would tend to increase its sparseness.

[Ed Porter] -- Learning changes in which dendrites and axons project into
a cube would require changing the matrix, which is doable, but can make
things more complicated.  Another issue is how many other cubes each
cm^3 would communicate with.  Are we talking 10, 100, 10^3, 10^4, 10^5, or
10^6?  The number could have a significant impact on communication costs.

[Ed Porter] -- I don't think this system would be good for my current model
for AGI representation, which is based on a type of graph matching, rather
than just a simple summing of synaptic inputs.

The internal computation bottleneck is the vector product which would be
implemented using 128 bit AND instructions in SSE2 at full serial memory
bandwidth. External communication is at most one bit per connected neuron
every cycle (20-100 ms), because the connectivity graph does not change
rapidly. A randomly connected sparse network could be described compactly
using hash functions.

[Ed Porter] -- It is interesting to think that this actually could be used
to speed up the processing of simple neural models.  I understand how the
row values associated with the axon synapses of a given neuron could be read
rapidly in a serial manner, and how run-length encoding, or some other
means, could be used to represent a sparse matrix more compactly.  I also
understand how the contributions to the activation of each of the 10^5
columns made by each row could be stored in L2 cache at a rate of about
100 MHz. 

[Ed Porter] -- L2 cache writes commonly take about 10 to 20 clock cycles.
Perhaps you could write them into memory blocks in L1 cache, which might
only take about two clock cycles.
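A minimal sketch of the per-cube update Matt describes, as I read it: neuron
states and matrix rows packed one bit per potential connection, the "vector
product" done as an AND followed by a bit count, and the random sparse
connectivity derived from the row index instead of being stored.  A seeded
PRNG stands in here for the hash function he mentions, and numpy with 8-bit
words replaces the 128-bit SSE2 registers, so this shows the logic, not the
speed; all names and densities are illustrative assumptions.

import numpy as np

N = 100_000                                 # neurons in one cube (1e5)
rng = np.random.default_rng(9999)
state = np.packbits(rng.random(N) < 0.5)    # which neurons fired this cycle, 1 bit each

def row_bits(i):
    # Connection bits for neuron i, ~5% dense, derived from i rather than stored.
    return np.packbits(np.random.default_rng(i).random(N) < 0.05)

def activation(i):
    # AND the row against the state vector, then count set bits: the number
    # of active presynaptic inputs to neuron i this cycle.
    return int(np.unpackbits(row_bits(i) & state).sum())

print(activation(0), activation(1))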

Re: [agi] Nirvana

2008-06-13 Thread J Storrs Hall, PhD
There've been enough responses to this that I will reply in generalities, and 
hope I cover everything important...

When I described Nirvana attractors as a problem for AGI, I meant that in 
the sense that they form a substantial challenge for the designer (as do many 
other features/capabilities of AGI!), not that it was an insoluble problem.

The hierarchical fixed utility function is probably pretty good -- not only 
does it match humans (a la Maslow) but also Asimov's Three Laws. And it can be 
more subtle than it originally appears: 

Consider a 3-Laws robot that refuses to cut a human with a knife because that 
would harm her. It would be unable to become a surgeon, for example. But the 
First Law has a clause, "or through inaction allow a human to come to harm," 
which means that the robot cannot obey by doing nothing -- it must weigh the 
consequences of all its possible courses of action. 

Now note that it hasn't changed its utility function -- it always believed 
that, say, appendicitis is worse than an incision -- but what can happen is 
that its world model gets better and it *looks like* it's changed its utility 
function because it now knows that operations can cure appendicitis.
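A toy version of that distinction (the action names, conditions, and numbers
below are invented for illustration): the utility table never changes, only
the action-to-outcome model does, yet the chosen action flips.

# Fixed utilities over conditions; the robot always "believed" appendicitis
# is worse than an incision.
UTILITY = {"incision": -1.0, "appendicitis": -10.0}

def value(conditions):
    return sum(UTILITY[c] for c in conditions)

def choose(world_model):
    # Pick the action whose predicted outcome has the highest utility.
    return max(world_model, key=lambda action: value(world_model[action]))

# Before learning: as far as the robot knows, cutting only adds harm.
naive_model  = {"operate": {"incision", "appendicitis"}, "do nothing": {"appendicitis"}}
# After learning that operations cure appendicitis -- same utilities, better model.
better_model = {"operate": {"incision"},                 "do nothing": {"appendicitis"}}

print(choose(naive_model))    # "do nothing" (looks like a refusal to cut)
print(choose(better_model))   # "operate"    (looks like a changed utility function, but is not)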

Now it seems reasonable that this is a lot of what happens with people, too. 
And you can get a lot of mileage out of expressing the utility function in 
very abstract terms, e.g. "life-threatening disease", so that no utility 
function update is necessary when you learn about a new disease.

The problem is that the more abstract you make the concepts, the more the 
process of learning an ontology looks like ... revising your utility 
function!  Enlightenment, after all, is a Good Thing, so anything that leads 
to it, nirvana for example, must be good as well. 

So I'm going to broaden my thesis and say that the nirvana attractors lie in 
the path of *any* AI with unbounded learning ability that creates new 
abstractions on top of the things it already knows.

How to avoid them? I think one very useful technique is to start with the kind 
of knowledge and introspection capability to let the AI know when it faces 
one, and recognize that any apparent utility therein is fallacious. 

Of course, none of this matters till we have systems that are capable of 
unbounded self-improvement and abstraction-forming, anyway.

Josh




Re: [agi] Nirvana

2008-06-13 Thread Mark Waser

"Most people are about as happy as they make up their minds to be."
-- Abraham Lincoln

In our society, after a certain point where we've taken care of our 
immediate needs, arguably we humans are and should be subject to the Nirvana 
effect.


Deciding that you can settle for something (if your subconscious truly can 
handle it) definitely makes you more happy than not.


If, like a machine, you had complete control over your subconscious/utility 
functions, you *could* Nirvana yourself by happily accepting anything.


This is why pleasure and lack of pain suck as goals.  They are not goals, 
they are status indicators.  If you accept them as goals, nirvana is clearly 
the fastest, cleanest, and most effective way to fulfill them.


Why is this surprising or anything to debate about?




- Original Message - 
From: J Storrs Hall, PhD [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Friday, June 13, 2008 11:58 AM
Subject: Re: [agi] Nirvana




Re: [agi] Nirvana

2008-06-13 Thread J Storrs Hall, PhD
In my visualization of the Cosmic All, it is not surprising.

However, there is an undercurrent of the Singularity/AGI community that is 
somewhat apocalyptic in tone, and which (to my mind) seems to imply or assume 
that somebody will discover a Good Trick for self-improving AIs and the jig 
will be up with the very first one. 

I happen to think it'll be a lot more like the Industrial Revolution -- it'll 
take a lot of work by a lot of people, but revolutionary in its implications 
for the human condition even so.

I'm just trying to point out where I think some of the work will have to go.

I think that our culture of self-indulgence is to some extent in a Nirvana 
attractor. If you think that's a good thing, why shouldn't we all lie around 
with wires in our pleasure centers (or hopped up on cocaine, same 
difference) with nutrient drips?

I'm working on AGI because I want to build a machine that can solve problems I 
can't solve alone. The really important problems are not driving cars, or 
managing companies, or even curing cancer, although building machines that 
can do these things will be of great benefit. The hard problems are moral 
ones, how to live in increasingly complex societies without killing each 
other, and so forth. That's why it matters that an AGI be morally 
self-improving as well as intellectually.

pax vobiscum,

Josh


On Friday 13 June 2008 12:29:33 pm, Mark Waser wrote:
 Most people are about as happy as they make up their minds to be.
 -- Abraham Lincoln
 
 In our society, after a certain point where we've taken care of our 
 immediate needs, arguably we humans are and should be subject to the Nirvana 
 effect.
 
 Deciding that you can settle for something (if your subconscious truly can 
 handle it) definitely makes you more happy than not.
 
 If, like a machine, you had complete control over your subconscious/utility 
 functions, you *could* Nirvana yourself by happily accepting anything.
 
 This is why pleasure and lack of pain suck as goals.  They are not goals, 
 they are status indicators.  If you accept them as goals, nirvana is clearly 
 the fastest, cleanest, and most effective way to fulfill them.
 
 Why is this surprising or anything to debate about?
 




RE: [agi] IBM, Los Alamos scientists claim fastest computer

2008-06-13 Thread Matt Mahoney
--- On Fri, 6/13/08, Ed Porter [EMAIL PROTECTED] wrote:
 [Ed Porter] -- Why couldn't each of the 10^6 fibers
 have multiple connections along its length within the cm^3 (although it
 could be represented as one row in the matrix, with individual
 connections represented as elements in such a row)

I think you mean 10^6 fibers in 1 cubic millimeter. They would have multiple 
connections, but I am only counting interprocessor communication, which is 1 
bit to transmit the state of the neuron (on or off) or a few bits to transmit 
its activation level to neighboring processors.
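Putting rough numbers on that accounting (the fiber count and cycle time are
the figures used earlier in this thread; a back-of-envelope bound, not a
measurement):

# Interprocessor traffic if only neuron states cross the boundary (a sketch).
external_fibers = 1e6     # fiber endpoints crossing one cube's surface (upper bound above)
bits_per_fiber  = 1       # on/off state; a few bits if an activation level is sent
cycle_seconds   = 0.02    # 20 ms, the fast end of the 20-100 ms cycle

bits_per_second = external_fibers * bits_per_fiber / cycle_seconds
print(bits_per_second / 1e6, "Mbit/s per processor")   # ~50 Mbit/s at the fast end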

With regard to representing different types of synapses (various time delays, 
strength bounds, learning rates, etc), this information can be recorded as 
characteristics of the input and output neurons and derived as needed to save 
space.

Minimizing inter-processor communication is a harder problem. This can be done 
by mapping the neural network into a hierarchical organization so that groups 
of co-located neurons are forced to communicate with other groups through 
narrow channels using a small number of neurons. We know that many problems can 
be solved this way. For example, a semantic language model made of a 20K by 20K 
word association matrix can be represented using singular value decomposition 
as a 3 layer neural network with about 100 to 200 hidden neurons [1,2]. The two 
weight matrices could then be implemented on separate processors which 
communicate through the hidden layer neurons. More generally, we know from 
chaos theory that complex systems must limit the number of interconnections to 
be stable [3], which suggests that many AI problems in general can be 
decomposed this way.
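Here is a scaled-down sketch of that decomposition (500 words and 50 hidden
units instead of 20K and 100-200, and a synthetic low-rank matrix standing in
for real word-association counts, which empirically compress to a similar
degree [1,2]): the two factor matrices could live on separate processors, and
only the handful of hidden-layer values needs to cross between them.

import numpy as np

rng = np.random.default_rng(0)
words, hidden = 500, 50

# Synthetic association counts with low-rank structure plus a little noise,
# standing in for a real word-by-word co-occurrence matrix.
A = rng.random((words, 30)) @ rng.random((30, words)) + 0.01 * rng.random((words, words))

U, s, Vt = np.linalg.svd(A, full_matrices=False)
W1 = U[:, :hidden] * s[:hidden]    # input word -> hidden layer   (processor 1)
W2 = Vt[:hidden]                   # hidden layer -> output words (processor 2)

x = np.zeros(words); x[42] = 1.0   # activate one input word
h = x @ W1                         # only these 50 numbers cross the processor boundary
y = h @ W2                         # approximate row 42 of A, rebuilt on the other side
print(f"max reconstruction error: {np.abs(y - A[42]).max():.4f}")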

Remember we need not model the human brain in precise detail, since our goal is 
to solve AGI by any means. We are allowed to use more efficient algorithms if 
we discover them.

I ran some benchmarks on my PC (2.2 GHz Athlon-64 3500+). It copies large 
arrays at 1 GB per second using MMX or SSE2, which is not quite fast enough for 
a 10^5 by 10^5 neural network simulation.
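For anyone who wants to repeat that check, a rough equivalent (numpy's array
copy rather than a hand-written MMX/SSE2 loop, so the absolute number will
differ by machine; the comparison against what one full matrix pass per cycle
would need is the point):

import time
import numpy as np

buf = np.zeros(2**28, dtype=np.uint8)        # 256 MB source array
t0 = time.perf_counter()
dst = buf.copy()                             # large sequential copy
dt = time.perf_counter() - t0
print(f"copy bandwidth: {buf.nbytes / dt / 1e9:.1f} GB/s")

# A 1e5 x 1e5 bit matrix is 1.25 GB; one full pass per 20-100 ms cycle needs:
matrix_gb = 1e10 / 8 / 1e9
print(f"needed: {matrix_gb / 0.1:.1f} to {matrix_gb / 0.02:.1f} GB/s")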

1. Bellegarda, Jerome R., John W. Butzberger, Yen-Lu Chow, Noah B. Coccaro, 
Devang Naik (1996), “A novel word clustering algorithm based on latent semantic 
analysis”, Proc. IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing, 
vol. 1, 172-175.

2. Gorrell, Genevieve (2006), “Generalized Hebbian Algorithm for Incremental 
Singular Value Decomposition in Natural Language Processing”, Proceedings of 
EACL 2006, Trento, Italy.
http://www.aclweb.org/anthology-new/E/E06/E06-1013.pdf

3. Kauffman, Stuart A. (1991), “Antichaos and Adaptation”, Scientific American, 
Aug. 1991, p. 64.


-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Nirvana

2008-06-13 Thread Mark Waser

I think that our culture of self-indulgence is to some extent in a Nirvana
attractor. If you think that's a good thing, why shouldn't we


No, I think it's a bad thing.  That's why I said "This is why pleasure 
and lack of pain suck as goals."



However, there is an undercurrent of the Singularity/AGI community that is
somewhat apocalyptic in tone,


Yeah, well, I would (and will, shortly) argue differently.


- Original Message - 
From: J Storrs Hall, PhD [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Friday, June 13, 2008 1:28 PM
Subject: Re: [agi] Nirvana





Re: [agi] Nirvana

2008-06-13 Thread Jiri Jelinek
On Fri, Jun 13, 2008 at 1:28 PM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
 I think that our culture of self-indulgence is to some extent in a Nirvana
 attractor. If you think that's a good thing, why shouldn't we all lie around
 with  wires in our pleasure centers (or hopped up on cocaine, same
 difference) with nutrient drips?

Because it's unsafe for now.
We will eventually work it out.

Jiri




Re: [agi] Nirvana

2008-06-13 Thread Jiri Jelinek
Mark,

Assuming that
a) pain avoidance and pleasure seeking are our primary driving forces; and
b) our intelligence wins over our stupidity; and
c) we don't get killed by something we cannot control;
Nirvana is where we go.

Jiri




Re: [agi] Nirvana

2008-06-13 Thread Mark Waser
Yes, but I strongly disagree with assumption one.  Pain avoidance and 
pleasure are best viewed as status indicators, not goals.


- Original Message - 
From: Jiri Jelinek [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Friday, June 13, 2008 3:42 PM
Subject: Re: [agi] Nirvana





Re: [agi] The Logic of Nirvana

2008-06-13 Thread Jiri Jelinek
"a future AGI will probably believe one thing, but act as if it believes 
something quite different, for very logical reasons."

I wish/hope we can avoid that. AGI should IMO follow scientific
principles. If its honesty hurts, then it is our society/environment that
should be targeted for change.

Regards,
Jiri Jelinek




Re: [agi] The Logic of Nirvana

2008-06-13 Thread J Storrs Hall, PhD
On Friday 13 June 2008 02:42:10 pm, Steve Richfield wrote:
 Buddhism teaches that happiness comes from within, so stop twisting the
 world around to make yourself happy, because this can't succeed. However, it
 also teaches that all life is sacred, so pay attention to staying healthy.
 In short, attend to the real necessities and don't sweat the other stuff.

A better example of goal abstraction I couldn't have made up myself.





Re: [agi] Nirvana

2008-06-13 Thread Jiri Jelinek
 a) pain avoidance and pleasure seeking are our primary driving forces;
On Fri, Jun 13, 2008 at 3:47 PM, Mark Waser [EMAIL PROTECTED] wrote:
 Yes, but I strongly disagree with assumption one.  Pain avoidance and
 pleasure are best viewed as status indicators, not goals.

Pain and pleasure [levels] might be indicators (or primary action
triggers), but I think it's OK to call pain avoidance and pleasure
seeking our driving forces.  I cannot think of any intentional
human activity which is not somehow associated with those primary
triggers/driving forces, and that's why I believe assumption one
is valid.

Best,
Jiri




Re: [agi] The Logic of Nirvana

2008-06-13 Thread Jiri Jelinek
 Buddhism teaches that happiness comes from within, so stop twisting the
 world around to make yourself happy, because this can't succeed.

Which is of course false... It might come from within, but the triggers can
be internal as well as external, and both work pretty well.  As for twisting
the world around, it's just a matter of having enough power, so it works,
though only for the few who have it.

Jiri

Religion is all bunk. [Thomas A. Edison]




Re: [agi] Nirvana

2008-06-13 Thread Mark Waser

Your belief value is irrelevant to reality.

Of course all human activity is associated with pain and pleasure because 
evolution gave us pleasure and pain to motivate us to do smart things (as 
far as evolution is concerned) and avoid stupid things (and yes, I am 
anthropomorphizing evolution for ease of communication but if you can't 
figure out what I really mean . . . . ).


However, correlation is not equivalent to causation.

The goal is survival or propagation of the species.  Evolution rewards or
punishes according to these goals.  If you ignore these goals and reprogram
your pleasure and pain, you go extinct.


More clearly, if you wire-head, you go extinct (i.e. you are an evolutionary 
loser).


Go ahead and wirehead if you wish, but don't be surprised if someone with the 
same values decides that he is allowed to kill you painlessly, since you're 
eating up resources he could use to promote his own pleasure.


But then again, it really doesn't matter because you're extinct either way, 
right?



- Original Message - 
From: Jiri Jelinek [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Friday, June 13, 2008 4:34 PM
Subject: Re: [agi] Nirvana





Re: [agi] Nirvana

2008-06-13 Thread Jiri Jelinek
On Fri, Jun 13, 2008 at 6:21 PM, Mark Waser [EMAIL PROTECTED] wrote:
 if you wire-head, you go extinct

Doing it today certainly wouldn't be a good idea, but whatever we do
to take care of risks and improvements, our AGI(s) will eventually do
a better job of it, so why not then?

Regards,
Jiri Jelinek

