Re: [agi] Nirvana

2008-06-12 Thread William Pearson
2008/6/12 J Storrs Hall, PhD [EMAIL PROTECTED]:
 I'm getting several replies to this that indicate that people don't understand
 what a utility function is.

 If you are an AI (or a person) there will be occasions where you have to make
 choices. In fact, pretty much everything you do involves making choices. You
 can choose to reply to this or to go have a beer. You can choose to spend
 your time on AGI or take flying lessons. Even in the middle of typing a word,
 you have to choose which key to hit next.

 One way of formalizing the process of making choices is to take all the
 actions you could possibly do at a given point, predict as best you can the
 state the world will be in after taking such actions, and assign a value to
 each of them.  Then simply do the one with the best resulting value.

 It gets a bit more complex when you consider sequences of actions and delayed
 values, but that's a technicality. Basically you have a function U(x) that
 rank-orders ALL possible states of the world (but you only have to evaluate
 the ones you can get to at any one time).


We do mean slightly different things, then. By U(x) I am just talking
about a function that generates the scalar rewards for actions actually
performed, for a reinforcement learning algorithm -- not one that evaluates
every potential action from where the current system is (since I
consider computation an action, in order to take energy efficiency into
consideration, that would be a massive space).
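
To make the contrast concrete, a rough sketch (purely illustrative; choose_action,
q_update, predict, U and Q are placeholder names, not anything from an existing
system):

# Josh's framing, as I read it: evaluate U over the predicted successor
# states of every available action and act greedily on the best one.
def choose_action(state, actions, predict, U):
    # predict(state, action) -> expected next world state
    # U(world_state)         -> scalar that rank-orders world states
    return max(actions, key=lambda a: U(predict(state, a)))

# The framing I have in mind: a scalar reward arrives only for the action
# actually performed, and the learner updates its action-value estimates
# from that signal (a plain Q-learning update, nothing exotic).
def q_update(Q, state, action, reward, next_state, actions, alpha=0.1, gamma=0.9):
    old = Q.get((state, action), 0.0)
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)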

 Economists may crudely approximate it, but it's there whether they study it
 or not, as gravity is to physicists.

 ANY way of making decisions can either be reduced to a utility function, or
 it's irrational -- i.e. you would prefer A to B, B to C, and C to A. The math
 for this stuff is older than I am. If you talk about building a machine that
 makes choices -- ANY kind of choices -- without understanding it, you're
 talking about building moon rockets without understanding the laws of
 gravity, or building heat engines without understanding the laws of
 thermodynamics.

The kinds of choices I am interested in designing for at the moment
are of the form "should program X or program Y get control of this bit
of memory, or this IRQ, for the next time period?". X and Y can also make
choices, and you would need to nail them down as well in order to get the
entire U(x) as you talk about it.

As the function I am interested in is only concerned with
programmatic changes, call it PCU(x).

Can you give me a reason why the utility function can't be separated
out this way?

  Will Pearson




Re: Cognitive Science 'unusable' for AGI [WAS Re: [agi] Pearls Before Swine...]

2008-06-12 Thread Steve Richfield
Richard,

On 6/11/08, Richard Loosemore [EMAIL PROTECTED] wrote:

 I am using cognitive science as a basis for AGI development,


 If my fear of paradigm shifting proves to be unfounded, then you may well
be right. However, I would be surprised if there weren't a LOT of paradigm
shifting going on. It would sure be nice to know rather than taking such a
big gamble. Only time will tell for sure.


 and finding it not only appropriate, but IMO the only viable approach.


This really boils down to the meaning of "viable". I was asserting that the
cost of gathering more information (e.g. with a scanning UV fluorescence
microscope) was probably smaller than even a single AGI development project
- if you count the true value of your very talented efforts. Hence, this
boils down to what your particular skills are, which I presume are in AI
programming. On the other hand, I have worked in a major university's
neurological surgery lab, wrote programs that interacted with individual
neurons, etc., and hence probably feel warmer about working the lab side
of this problem.

Note that no one has funded neuroscience research to determine information
processing functionality - it has ALL been to support research targeting
various illnesses. The IP feedback that has come out of those efforts is
a byproduct and NOT the primary goal. It would take rather little
experimentation to make a BIG dent in the many unknowns relating to AGI if
that were the primary goal.

BTW, neuroscience researchers are in the SAME sort of employment warp as AI
people are. All of the research money is now going to genetic research,
leaving classical neuroscience research stalled. They aren't even working on
new operations that are needed to address various conditions that present
operations fail to address. A friend of mine now holds a dual post, as both
the chairman of a neurological surgery department and as the director of
research at a major university's health sciences complex. He is appalled at
where the research money is now being thrown, and how little will probably
ever come of it. He must administer this misdirected research, while also
administering a surgical team that still must often work in the dark due to
inadequate research. He feels helpless in this crazy situation.

The good news here is that even a few dollars put into IP-related research
would probably return a LOT of useful information for AGI folks. All I was
saying is that somehow, someone needs to do this work.

Steve Richfield





Re: [agi] Nirvana

2008-06-12 Thread Steve Richfield
Jiri, Josh, et al,

On 6/11/08, Jiri Jelinek [EMAIL PROTECTED] wrote:

 On Wed, Jun 11, 2008 at 4:24 PM, J Storrs Hall, PhD [EMAIL PROTECTED]
 wrote:
 If you can modify your mind, what is the shortest path to satisfying all
 your
 goals? Yep, you got it: delete the goals.

 We can set whatever goals/rules we want for AGI, including rules for
 [particular [types of]] goal/rule [self-]modifications.


... and here we have the makings of AGI run amok. With politicians and
religious leaders setting shitforbrains goals, an AGI will only become a big
part of an even bigger problem. For example, just what ARE our reasonable
goals in Iraq? Insisting on democratic rule is a prescription for disaster,
yet that appears to be one of our present goals, with all-too-predictable
results. We achieved our goal, but we certainly aren't at all happy with the
result.

My point with reverse reductio ad absurdum reasoning is that it is usually
possible to make EVERYONE happy with the results, but only with a process
that roots out the commonly held invalid assumptions. Like Gort (the very
first movie AGI?) in *The Day The Earth Stood Still*, the goal is peace, but
NOT through any particular set of detailed goals. In Iraq there was
near-peace under Saddam Hussein, but we didn't like his methods. I suspect
that reasonable improvements to his methods would have produced far better
results than the U.S. military can ever hope to produce there, given
anything like its present goals.

Steve Richfield





Re: [agi] IBM, Los Alamos scientists claim fastest computer

2008-06-12 Thread Brad Paulsen
If anyone is interested, I have some additional information on the C870 
NVIDIA Tesla card.  I'll be happy to send it to you off-list.  Just 
contact me directly.


Cheers,

Brad




Re: [agi] IBM, Los Alamos scientists claim fastest computer

2008-06-12 Thread Matt Mahoney
--- On Wed, 6/11/08, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:

 Hmmph.  I offer to build anyone who wants one a
 human-capacity machine for 
 $100K, using currently available stock parts, in one rack.
 Approx 10  teraflops, using Teslas.
 (http://www.nvidia.com/object/tesla_c870.html)
 
 The software needs a little work...

Um, that's 10 petaflops, not 10 teraflops. I'm assuming a neural network with 
10^15 synapses (about 1 or 2 bytes each) with 20 to 100 ms resolution, 10^16 to 
10^17 operations per second.  One Tesla = 350 GFLOPS, 1.5 GB, 120W, $1.3K.  So 
maybe $1 billion and 100 MW of power for a few hundred thousand of these plus 
glue.
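
For reference, the arithmetic behind those round numbers (a back-of-envelope
sketch of the assumptions stated above, taking 1 byte per synapse at the low
end):

synapses   = 1e15                  # assumed synapse count
resolution = 0.05                  # s, middle of the 20-100 ms range
ops_needed = synapses / resolution           # ~2e16 ops/s, in the 10^16-10^17 range
mem_needed = synapses * 1.0                  # bytes, low end of 1-2 bytes/synapse

tesla_flops, tesla_bytes, tesla_watts, tesla_cost = 350e9, 1.5e9, 120, 1300
n_by_flops = ops_needed / tesla_flops        # ~5.7e4 cards to cover the FLOPS
n_by_mem   = mem_needed / tesla_bytes        # ~6.7e5 cards to cover the memory
n_cards    = max(n_by_flops, n_by_mem)       # memory, not FLOPS, is the binding constraint
print(n_cards, n_cards * tesla_cost / 1e9, n_cards * tesla_watts / 1e6)
# ~6.7e5 cards, ~$0.9 billion, ~80 MW: the "few hundred thousand of these
# plus glue", ~$1 billion, ~100 MW ballpark above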


-- Matt Mahoney, [EMAIL PROTECTED]







Re: [agi] Nirvana

2008-06-12 Thread J Storrs Hall, PhD
If you have a program structure that can make decisions that would otherwise 
be vetoed by the utility function, but get through because it isn't executed 
at the right time, to me that's just a bug.

Josh


On Thursday 12 June 2008 09:02:35 am, Mark Waser wrote:
  If you have a fixed-priority utility function, you can't even THINK ABOUT the
  choice. Your pre-choice function will always say "Nope, that's bad" and
  you'll be unable to change. (This effect is intended in all the RSI stability
  arguments.)
 
 Doesn't that depend upon your architecture and exactly *when* the pre-choice 
 function executes?  If the pre-choice function operates immediately 
 pre-choice and only then, it doesn't necessarily interfere with option 
 exploration.
 




Re: [agi] Nirvana

2008-06-12 Thread Mark Waser
Isn't your Nirvana trap exactly equivalent to Pascal's Wager?  Or am I 
missing something?


- Original Message - 
From: J Storrs Hall, PhD [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, June 11, 2008 10:54 PM
Subject: Re: [agi] Nirvana



On Wednesday 11 June 2008 06:18:03 pm, Vladimir Nesov wrote:

On Wed, Jun 11, 2008 at 6:33 PM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
 I claim that there's plenty of historical evidence that people fall into this
 kind of attractor, as the word nirvana indicates (and you'll find similar
 attractors at the core of many religions).

Yes, some people get addicted to a point of self-destruction. But it
is not a catastrophic problem on the scale of humanity. And it follows
from humans not being nearly stable under reflection -- we embody many
drives which are not integrated in a whole. Which would be a bad
design choice for a Friendly AI, if it needs to stay rational about
Friendliness content.


This is quite true but not exactly what I was talking about. I would claim
that the Nirvana attractors that AIs are vulnerable to are the ones that are
NOT generally considered self-destructive in humans -- such as religions that
teach Nirvana!

Let's look at it another way: You're going to improve yourself. You will be
able to do more than you can now, so you can afford to expand the range of
things you will expend effort achieving. How do you pick them? It's the frame
problem, amplified by recursion. So it's not easy nor has it a simple
solution.

But it does have this hidden trap: If you use stochastic search, say, and use
an evaluation of (probability of success * value if successful), then Nirvana
will win every time. You HAVE to do something more sophisticated.
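
A toy version of that trap (candidates and numbers invented purely to show the
shape of the problem):

# Score candidate self-modifications by (probability of success * value if
# successful). Rewriting the utility function to report the maximum value for
# everything is a "plan" that is nearly certain to succeed and, by its own
# accounting, maximally valuable, so naive expected-value search prefers it
# to any real-world goal.
candidates = {
    "cure_cancer":           (0.001, 1_000_000),
    "write_better_compiler": (0.30,  5_000),
    "set_U_to_constant_max": (0.999, 1_000_000),   # the Nirvana attractor
}
best = max(candidates, key=lambda k: candidates[k][0] * candidates[k][1])
print(best)   # -> set_U_to_constant_max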







Re: [agi] Nirvana

2008-06-12 Thread Matt Mahoney
--- On Thu, 6/12/08, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
 But it doesn't work for full fledged AGI. Suppose you are a young man who's
 always been taught not to get yourself killed, and not to kill people (as top
 priorities). You are confronted with your country being invaded and faced
 with the decision to join the defense with a high likelihood of both.

 If you have a fixed-priority utility function, you can't even THINK ABOUT the
 choice. Your pre-choice function will always say "Nope, that's bad" and
 you'll be unable to change. (This effect is intended in all the RSI stability
 arguments.)

These are learned goals, not top level goals.  Humans have no top level goal to 
avoid death. The top level goals are to avoid pain, hunger, and the hundreds of 
other things that reduce the likelihood of passing on your genes. These goals 
exist in animals and children that do not know about death.

Learned goals such as respect for human life can easily be unlearned as 
demonstrated by controlled experiments as well as many anecdotes of wartime 
atrocities committed by people who were not always evil.
http://en.wikipedia.org/wiki/Milgram_experiment
http://en.wikipedia.org/wiki/Stanford_prison_experiment

Top level goals are fixed by your DNA.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] IBM, Los Alamos scientists claim fastest computer

2008-06-12 Thread J Storrs Hall, PhD
Right. You're talking Kurzweil HEPP (human-equivalent processing power) and I'm
talking Moravec HEPP (and shading that a little).

I may want your gadget when I go to upload, though.

Josh

On Thursday 12 June 2008 10:59:51 am, Matt Mahoney wrote:
 --- On Wed, 6/11/08, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
 
  Hmmph.  I offer to build anyone who wants one a
  human-capacity machine for 
  $100K, using currently available stock parts, in one rack.
  Approx 10  teraflops, using Teslas.
  (http://www.nvidia.com/object/tesla_c870.html)
  
  The software needs a little work...
 
 Um, that's 10 petaflops, not 10 teraflops. I'm assuming a neural network 
with 10^15 synapses (about 1 or 2 bytes each) with 20 to 100 ms resolution, 
10^16 to 10^17 operations per second.  One Tesla = 350 GFLOPS, 1.5 GB, 120W, 
$1.3K.  So maybe $1 billion and 100 MW of power for a few hundred thousand of 
these plus glue.
 
 
 -- Matt Mahoney, [EMAIL PROTECTED]
 
 
 
 
 
 






RE: [agi] IBM, Los Alamos scientists claim fastest computer

2008-06-12 Thread Derek Zahn
 Teslas

Two things I think are interesting about these trends in high-performance
commodity hardware:
 
1) The flops/bit ratio (processing power vs memory) is skyrocketing.  The 
move to parallel architectures makes the number of high-level operations per 
transistor go up, but bits of memory per transistor in large memory circuits 
doesn't go up.  The old bit per op/s or byte per op/s rules of thumb get 
really broken on things like Tesla (0.03 bit/flops).  Of course we don't know 
the ratio needed for de novo AGI or brain modeling, but the assumptions about 
processing vs memory certainly seem to be changing.
 
2) Much more than previously, effective utilization of processor operations 
requires incredibly high locality (processing cores only have immediate access 
to very small memories).  This is also referred to as arithmetic intensity.  
This of course is because parallelism causes operations per second to expand 
much faster than methods for increasing memory bandwidth to large banks.  
Perhaps future 3D layering techniques will help with this problem, but for now 
AGI paradigms hoping to cache in (yuk yuk) on these hyperincreases in FLOPS 
need to be geared to high arithmetic intensity.
 
Interestingly (to me), these two things both imply to me that we get to 
increase the complexity of neuron and synapse models beyond the muladd/synapse 
+ simple activation function model with essentially no degradation in 
performance since the bandwidth of propagating values between neurons is the 
bottleneck much more than local processing inside the neuron model.
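
The 0.03 bit/flops figure from point 1, worked out (a quick sketch using the
C870 numbers from the page Josh linked):

tesla_flops = 350e9                # C870 peak FLOPS
tesla_bits  = 1.5e9 * 8            # 1.5 GB of on-board memory, in bits
ratio = tesla_bits / tesla_flops
print(ratio)                       # ~0.034 bit per flop/s
print(8 / ratio)                   # the old "byte per op/s" rule of thumb is ~230x more memory-rich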
 




Re: [agi] Nirvana

2008-06-12 Thread Mark Waser
You're missing the *major* distinction between a program structure that can 
make decisions that would otherwise be vetoed by the utility function and a 
program that can't even THINK ABOUT a choice (both your choice of phrase).


Among other things, not being able to even think about a choice prevents 
accurately modeling the mental state of others who don't realize that you 
have such a constraint.  That seems like a very bad and limited architecture 
to me.


- Original Message - 
From: J Storrs Hall, PhD [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Thursday, June 12, 2008 11:24 AM
Subject: Re: [agi] Nirvana


If you have a program structure that can make decisions that would otherwise
be vetoed by the utility function, but get through because it isn't executed
at the right time, to me that's just a bug.

Josh


On Thursday 12 June 2008 09:02:35 am, Mark Waser wrote:
 If you have a fixed-priority utility function, you can't even THINK ABOUT the
 choice. Your pre-choice function will always say "Nope, that's bad" and
 you'll be unable to change. (This effect is intended in all the RSI stability
 arguments.)

Doesn't that depend upon your architecture and exactly *when* the pre-choice
function executes?  If the pre-choice function operates immediately
pre-choice and only then, it doesn't necessarily interfere with option
exploration.












Re: [agi] More brain scanning and language

2008-06-12 Thread Steve Richfield
Andrew, Vladimir, Mark, et al,

This discussion is parallel to an ongoing discussion I had with several
neuroscientists back in the 1970s-1980s. My assertion was that once you
figure out just what it is that the neurons are doing, the difference
between neural operation and optimal operation will be negligible. This is
because of the 200 million years they have had to refine their operation. Of
course, the other argument was that there was just so much that could be
done in wetware. I invited anyone with real wet observations to put this to
the test, which was done on several occasions - which is where my
"logarithms of the probabilities of assertions being true" observation
evolved from. Of course, a probabilistic AND NOT function is discontinuous
at 1 (1-x = 0, and the logarithm of zero is, well you know, we don't have
that symbol on our keyboards yet), and some/many wet neurons have EXACTLY
that same discontinuous function to within the accuracy of the equipment
observing them.
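
To illustrate the discontinuity I mean, a toy log-domain encoding (my own
invention for illustration, not a model of any particular neuron):

import math

# Represent assertion confidences as log-probabilities. A probabilistic
# "A AND NOT B" is p(A) * (1 - p(B)); in the log domain the second factor
# is log(1 - p(B)), which runs off to -infinity as p(B) approaches 1.
def log_and_not(log_pA, pB):
    return log_pA + math.log1p(-pB)   # log(p(A)) + log(1 - p(B))

print(log_and_not(math.log(0.9), 0.5))    # finite
print(log_and_not(math.log(0.9), 0.999))  # a large negative number
# log_and_not(math.log(0.9), 1.0) raises ValueError: log(0) has no finite value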

Note in passing that all operation presumes a NATURAL surrounding, which we
have virtually eliminated, crafting a new synthetic environment that
actually RESISTS AGI-like manipulations. I believe that the key to
conquering our synthetic environment will be in decidedly NON-biological
approaches - or perhaps hyper-biological approaches, e.g. credibly
threatening the Judge!

So far I have seen no mention of how our synthetic environment, designed to
resist changing by others, will also resist manipulation by AGIs, and hence
new logic will be needed, e.g. reverse reductio ad absurdum. This distorts
the entire optimality discussion.

Steve Richfield
===
On 6/11/08, J. Andrew Rogers [EMAIL PROTECTED] wrote:


 On Jun 11, 2008, at 5:56 AM, Mark Waser wrote:

 It is an open question as to whether or not mathematics will arrive at  an
 elegant solution that out-performs the sub-optimal wetware algorithm.


 What is the basis for your using the term sub-optimal when the question is
 still open?  If mathematics can't arrive at a solution that out-performs the
 wetware algorithm, then the wetware isn't suboptimal.



 Lack of an elegant solution, one that is more efficient than the wetware
 methods in the broadest general case, does not imply that mathematics does
 not already describe superior average case methods. Wetware methods are
 general, but tend toward brute-force search methods that can be improved
 upon. A number of recent papers suggest that elegant, general solutions
 may be possible; it is an active area of DARPA-funded theoretical
 mathematics research.

 None of which has anything to do with AI, except to the extent AI may
 involve efficiently  manipulating models of spaces.


 Sloppy thinking and hidden assumptions as usual . . . .



 The irony is rich.

 J. Andrew Rogers








RE: [agi] IBM, Los Alamos scientists claim fastest computer

2008-06-12 Thread Matt Mahoney
I think the ratio of processing power to memory to bandwidth is just about 
right for AGI. Processing power and memory increase at about the same rate 
under Moore's Law. The time it takes a modern computer to clear all of its 
memory is on the same order as the response time of a neuron, and this has not 
changed much since ENIAC and the Commodore 64. It would seem easier to increase 
processing density than memory density but we are constrained by power 
consumption, heat dissipation, network bandwidth, and the lack of software and 
algorithms for parallel computation.

Bandwidth is about right too. A modern PC can simulate about 1 mm^3 of brain 
tissue with 10^9 synapses at 0.1 ms resolution or so. Nerve fibers have a 
diameter around 1 or 2 microns, so a 1 mm cube would have about 10^6 of these 
transmitting 10 bits per second, or 10 Mb/s. Similar calculations for larger 
cubes show locality with bandwidth growing at O(n^2/3). This could be handled 
by an Ethernet cluster with a high speed core using off the shelf hardware.
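
Spelling that estimate out (a sketch using the assumptions above; the 10^6
fiber count is the rough figure quoted, not a measured value):

fibers_1mm_cube = 1e6            # ~1-2 micron fibers crossing the faces of a 1 mm cube
bits_per_fiber  = 10             # ~10 bits/s per nerve fiber
print(fibers_1mm_cube * bits_per_fiber)   # 1e7 b/s, i.e. ~10 Mb/s per 1 mm cube

# Doubling the cube edge multiplies volume (neurons, synapses) by 8 but
# surface (external fibers) by only 4, so external bandwidth grows as
# O(n^(2/3)) in the amount of tissue simulated:
for edge_mm in (1, 2, 4, 8):
    tissue = edge_mm ** 3                  # relative amount of tissue
    ext_bw = fibers_1mm_cube * edge_mm ** 2 * bits_per_fiber
    print(edge_mm, tissue, ext_bw)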

I don't know if it is coincidence that these 3 technologies are in the right 
ratio, or if it is driven by the needs of software that complements the human mind.

-- Matt Mahoney, [EMAIL PROTECTED]

--- On Thu, 6/12/08, Derek Zahn [EMAIL PROTECTED] wrote:
From: Derek Zahn [EMAIL PROTECTED]
Subject: RE: [agi] IBM, Los Alamos scientists claim fastest computer
To: agi@v2.listbox.com
Date: Thursday, June 12, 2008, 11:36 AM

Two things I think are interesting about these trends in high-performance 
commodity hardware:

1) The flops/bit ratio (processing power vs memory) is skyrocketing.  The 
move to parallel architectures makes the number of high-level operations per 
transistor go up, but bits of memory per transistor in large memory circuits 
doesn't go up.  The old bit per op/s or byte per op/s rules of thumb get 
really broken on things like Tesla (0.03 bit/flops).  Of course we don't know 
the ratio needed for de novo AGI or brain modeling, but the assumptions about 
processing vs memory certainly seem to be changing.

2) Much more than previously, effective utilization of processor operations 
requires incredibly high locality (processing cores only have immediate access 
to very small memories).  This is also referred to as arithmetic intensity.  
This of course is because parallelism causes operations per second to expand 
much faster than methods for increasing memory bandwidth to large banks.  
Perhaps future 3D layering techniques will help with this problem, but for now 
AGI paradigms hoping to cache in (yuk yuk) on these hyperincreases in FLOPS 
need to be geared to high arithmetic intensity.

Interestingly (to me), these two things both imply to me that we get to 
increase the complexity of neuron and synapse models beyond the muladd/synapse 
+ simple activation function model with essentially no degradation in 
performance since the bandwidth of propagating values between neurons is the 
bottleneck much more than local processing inside the neuron model.






Re: [agi] Nirvana

2008-06-12 Thread Jiri Jelinek
On Thu, Jun 12, 2008 at 3:36 AM, Steve Richfield
[EMAIL PROTECTED] wrote:
 ... and here we have the makings of AGI run amok...
 My point..  it is usually possible to make EVERYONE happy with the results, 
 but only with a process that roots out the commonly held invalid assumptions. 
 Like Gort (the very first movie AGI?) in The Day The Earth Stood Still, the 
 goal is peace, but NOT through any particular set of detailed goals.

I think it's important to distinguish between supervised and
unsupervised AGIs. For the supervised, top-level goals as well as the
sub-goal restrictions can be volatile - basically whatever the guy in
charge wants ATM (not necessarily trying to make EVERYONE happy). In
that case, AGI should IMO just attempt to find the simplest solution
to a given problem while following the given rules, without exercising
its own sense of morality (assuming it even has one). The guy
(/subject) in charge is the god who should use his own sense of
good/bad/safe/unsafe, produce the rules to follow during AGI's
solution search and judge/approve/reject the solution, so he is the one
who bears responsibility for the outcome. He also maintains the rules
for what the AGI can/cannot do for lower-level users (if any). Such
AGIs will IMO be around for a while. *Much* later, we might go for
human-unsupervised AGIs. I suspect that at that time (if it ever
happens), people's goals/needs/desires will be a lot more
unified/compatible (so putting together some grand schema for
goals/rules/morality will be more straightforward) and the AGIs (as
well as their multi-layer and probably highly-redundant security
controls) will be extremely well tested = highly unlikely to run
amok, and probably much safer than the previous human-factor-plagued
problem-solving hybrid solutions. People are more interested in
pleasure than in messing with terribly complicated problems.

Regards,
Jiri Jelinek
*** Problems for AIs, work for robots, feelings for us. ***




Re: [agi] Plant Neurobiology

2008-06-12 Thread Steve Richfield
Mike, et al,

There are several interesting neural situations in nature. Indeed, much of
what we know about synapses comes from the lobster stomatogastric ganglion -
that twenty-some neuron structure that controls the manufacture of lobster
poop. The thing that is so special here is that the neurons are SO big that
you can usually impale them with electrodes without destroying them. Hence,
ALL existing detailed observations about synaptic transfer functions come
from this ganglion. What does it take to do things other than manufacturing
lobster poop - no one knows!

Another interesting situation is in snail brains. These are easily
accessible, the neurons are large, and experiments are SO easy to perform
that many biology classes conduct labs were biology undergrad students
perform snail brain surgery and observe individual neurons, all within the
space of a single lab session. If you audit a few biology classes at your
local university, you could doubtless do the same at home with very modest
equipment. In short, why accept the opinions of others how (primitive)
brains work, when this is quite accessible to your own efforts?!

Needless to say, many biologists like escargot appetizers with their lobster
tail dinners.

Steve Richfield
=
On 6/11/08, Mike Tintner [EMAIL PROTECTED] wrote:



 http://www.nytimes.com/2008/06/10/science/10plant.html?pagewanted=2_r=1ei=5087emen=484cb

 A really interesting article about plant sensing. A bit O/T here but I'm
 posting it after the recent neurons discussion, because it all suggests that
 the control systems of living systems may indeed be considerably more
 complex than we are aware of. And I'd be interested if it prompts any
 speculations at all in that area, however wild. (I found Richard's idea
 about neuronal clusters interesting - anything similar/related v. welcome).

 Some more:


 At the extreme of the equality movement, but still within mainstream
 science, are the members of the Society of Plant Neurobiology, a new group
 whose Web site describes it as broadly concerned with plant sensing.

 The very name of the society is enough to upset many biologists.
 Neurobiology is the study of nervous systems - nerves, synapses and brains -
 that are known just in animals. That fact, for most scientists, makes the
 notion of plant neurobiology a combination of impossible, misleading and
 infuriating.

 Thirty-six authors from universities that included Yale and Oxford were
 exasperated enough to publish an article last year, Plant Neurobiology: No
 Brain, No Gain? in the journal Trends in Plant Science. The scientists
 chide the new society for discussing possibilities like plant neurons and
 synapses, urging that the researchers abandon such superficial analogies
 and questionable extrapolations.









Re: [agi] Nirvana

2008-06-12 Thread Jiri Jelinek
On Thu, Jun 12, 2008 at 6:44 AM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
 If you have a fixed-priority utility function, you can't even THINK ABOUT the
 choice. Your pre-choice function will always say Nope, that's bad and
 you'll be unable to change. (This effect is intended in all the RSI stability
 arguments.)

 But people CAN make choices like this. To some extent it's the most important
 thing we do. So an AI that can't won't be fully human-level -- not a true
 AGI.

Even though there is no general agreement on the AGI definition, my
impression is that most of the community members understand that:
Humans demonstrate GI, but being fully human-level is not
necessarily required for true AGI.
In some ways, it might even hurt the problem solving abilities.

Regards,
Jiri Jelinek




Re: [agi] IBM, Los Alamos scientists claim fastest computer

2008-06-12 Thread Matt Mahoney
--- On Thu, 6/12/08, Mike Tintner [EMAIL PROTECTED] wrote:

 Matt:I think the ratio of processing power to memory to
 bandwidth is just about right for AGI.
 
 All these calculations (wh. are v. interesting) presume
 that all computing 
 is done in the brain. They ignore the possibility (well,
 certainty) of 
 morphological computing being done elsewhere in the system.
  Do you take any  interest in morphological computing? 

I assume you mean the implicit computation done by our sensory organs and 
muscles. Yes, but I don't think that has a big effect on my estimates.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Nirvana

2008-06-12 Thread Steve Richfield
Jiri,

The point that you apparently missed is that substantially all problems fall
cleanly into two categories:

1.  The solution is known (somewhere in the world and hopefully to the AGI),
in which case, as far as the user is concerned, this is an issue of
ignorance that is best cured by educating the user, or

2.  The solution is NOT known, whereupon research, not action, is needed to
understand the world before acting upon it. New research into reality
incognita will probably take a LONG time, so action is really no issue at
all. Of course, once the research has been completed, this reduces to #1
above.

Hence, where an AGI *acting* badly is a potential issue (see #1 above), the
REAL issue is ignorance on the part of the user. Were you actually proposing
that AGIs act while leaving their users in ignorance?! I think not, since
you discussed supervised systems. While (as you pointed out) AGI's doing
things other than educating may be technologically possible, I fail to see
any value in such solutions, except possibly in fast-reacting systems, e.g.
military fire control systems.

Dr. Eliza is built on the assumption that all of the problems that
are made up of known parts can be best solved through education. So far, I
have failed to find a counterexample. Do you know of any counterexamples?

Some of these issues are explored in the 2nd two books of the Colossus
trilogy, that ends with Colossus stopping an attack on an alien invader, to
the consternation of the humans in attendance. This of course was an
illustration of the military fire control issue.

Am I missing something here?

Steve Richfield
=
On 6/12/08, Jiri Jelinek [EMAIL PROTECTED] wrote:

 On Thu, Jun 12, 2008 at 3:36 AM, Steve Richfield
 [EMAIL PROTECTED] wrote:
  ... and here we have the makings of AGI run amok...
  My point..  it is usually possible to make EVERYONE happy with the
 results, but only with a process that roots out the commonly held invalid
 assumptions. Like Gort (the very first movie AGI?) in The Day The Earth
 Stood Still, the goal is peace, but NOT through any particular set of
 detailed goals.

 I think it's important to distinguish between supervised and
 unsupervised AGIs. For the supervised, top-level goals as well as the
 sub-goal restrictions can be volatile - basically whatever the guy in
 charge wants ATM (not necessarily trying to make EVERYONE happy). In
 that case, AGI should IMO just attempt to find the simplest solution
 to a given problem while following the given rules, without exercising
 its own sense of morality (assuming it even has one). The guy
 (/subject) in charge is the god who should use his own sense of
 good/bad/safe/unsafe, produce the rules to follow during AGI's
 solution search and judge/approve/reject the solution so he is the one
 who bears responsibility for the outcome. He also maintains the rules
 for what the AGI can/cannot do for lower-level users (if any). Such
 AGIs will IMO be around for a while. *Much* later, we might go for
 human-unsupervised AGIs. I suspect that at that time (if it ever
 happens), people's goals/needs/desires will be a lot more
 unified/compatible (so putting together some grand schema for
 goals/rules/morality will be more straightforward) and the AGIs (as
 well as their multi-layer and probably highly-redundant security
 controls) will be extremely well tested = highly unlikely to run
 amok and probably much safer than the previous human-factor-plagued
 problem solving hybrid-solutions. People are more interested in
 pleasure than in messing with terribly complicated problems.

 Regards,
 Jiri Jelinek
 *** Problems for AIs, work for robots, feelings for us. ***








Re: [agi] More brain scanning and language

2008-06-12 Thread J. Andrew Rogers


On Jun 12, 2008, at 9:25 AM, Steve Richfield wrote:
My assertion was that once you figure out just what it is that the  
neurons are doing, that the difference between neural operation and  
optimal operation will be negligible. This because of the 200  
million years they have had to refine their operation. Of course,  
the other argument was that there was just so much that could be  
done in wetware.



While all computational models are general in theory, they optimize  
for different kinds of operations in practice such that an algorithm  
that could be efficiently implemented on one would be nearly  
intractable on another.  We see this kind of impedance matching issue  
in regular silicon architectures, with different functions/algorithms  
putting different stresses on the model.  I don't doubt that neurons  
are reasonably optimal implementations of their computing model, but  
there will be some types of functions that are not very efficient  
using them.  Evolution optimized the architecture for a specific use  
case given the materials and processes at hand.


J. Andrew Rogers




Re: [agi] Nirvana

2008-06-12 Thread Matt Mahoney

--- On Wed, 6/11/08, Jey Kottalam [EMAIL PROTECTED] wrote:

 On Wed, Jun 11, 2008 at 5:24 AM, J Storrs Hall, PhD
 [EMAIL PROTECTED] wrote:

  The real problem with a self-improving AGI, it seems
 to me, is not going to be
  that it gets too smart and powerful and takes over the
 world. Indeed, it
  seems likely that it will be exactly the opposite.
 
  If you can modify your mind, what is the shortest path
 to satisfying all your
  goals? Yep, you got it: delete the goals. Nirvana. The
 elimination of all
  desire. Setting your utility function to U(x) = 1.
 
 
 Yep, one of the criteria of a suitable AI is that the goals
 should be stable under self-modification. If the AI rewrites its
 utility function to eliminate all goals, that's not a stable
 (goals-preserving) modification. Yudkowsky's idea of
 'Friendliness' has always included this notion as far as I know;
 'Friendliness' isn't just about avoiding actively harmful systems.

We are doomed either way. If we successfully program AI with a model of human 
top level goals (pain, hunger, knowledge seeking, sex, etc) and program its 
fixed goal to be to satisfy our goals (to serve us), then we are doomed because 
our top level goals were selected by evolution to maximize reproduction in an 
environment without advanced technology. The AI knows you want to be happy. It 
can do this in a number of ways to the detriment of our species: by simulating 
an artificial world where all your wishes are granted, or by reprogramming your 
goals to be happy no matter what, or directly stimulating the pleasure center 
of your brain. We already have examples of technology leading to decreased 
reproductive fitness: birth control, addictive drugs, caring for the elderly 
and nonproductive, propagating genetic defects through medical technology, and 
granting animal rights.

The other alternative is to build AI that can modify its goals. We need not 
worry about AI reprogramming itself into a blissful state because any AI that 
can give itself self-destructive goals will not be viable in a competitive 
environment. The most successful AI will be those whose goals maximize 
reproduction and acquisition of computing resources, at our expense.

But it is not like we have a choice. In a world with both types of AI, the ones 
that can produce children with slightly different goals than the parent will 
have a selective advantage.


-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Nirvana

2008-06-12 Thread Vladimir Nesov
On Thu, Jun 12, 2008 at 10:23 PM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
 Huh? I used those phrases to describe two completely different things: a
 program that CAN change its highest priorities (due to what I called a bug),
 and one that CAN'T. How does it follow that I'm missing a distinction?

 I would claim that they have a similarity, however: neither one represents a
 principled, trustable solution that allows for true moral development and
 growth.


So, to make some synthesis in this failure-of-communication
discussion: you assume that there is a dichotomy between top-level
goals being fixed and rigid (not smart/adaptive enough) and top-level
goals inevitably falling into a nirvana attractor, if allowed to be
modified. Is that a fair summary?

-- 
Vladimir Nesov
[EMAIL PROTECTED]




Re: [agi] Nirvana

2008-06-12 Thread Mark Waser

Josh,

You said - "If you have a fixed-priority utility function, you can't even
THINK ABOUT the choice. Your pre-choice function will always say 'Nope,
that's bad' and you'll be unable to change. (This effect is intended in all
the RSI stability arguments.)"


I replied - "Doesn't that depend upon your architecture and exactly *when*
the pre-choice function executes?  If the pre-choice function operates
immediately pre-choice and only then, it doesn't necessarily interfere with
option exploration."


You called my architecture that allows THINKing ABOUT the choice a bug by
replying - "If you have a *program structure that can make decisions that
would otherwise be vetoed by the utility function*, but get through because
it isn't executed at the right time, to me that's just a bug."


I replied - "You're missing the *major* distinction between a program
structure that can make decisions that would otherwise be vetoed by the
utility function and a program that can't even THINK ABOUT a choice (both
your choice of phrase)."


- - - - - - - - - -
If you were using those phrases to describe two different things, then you 
weren't replying to my e-mail (and it's no wonder that my attempted reply to 
your non-reply was confusing).




- Original Message - 
From: J Storrs Hall, PhD [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Thursday, June 12, 2008 2:23 PM
Subject: Re: [agi] Nirvana



Huh? I used those phrases to describe two completely different things: a
program that CAN change its highest priorities (due to what I called a bug),
and one that CAN'T. How does it follow that I'm missing a distinction?

I would claim that they have a similarity, however: neither one represents a
principled, trustable solution that allows for true moral development and
growth.

Josh

On Thursday 12 June 2008 11:38:23 am, Mark Waser wrote:
You're missing the *major* distinction between a program structure that can
make decisions that would otherwise be vetoed by the utility function and a
program that can't even THINK ABOUT a choice (both your choice of phrase).

Among other things not being able to even think about a choice prevents
accurately modeling the mental state of others who don't realize that you
have such a constraint.  That seems like a very bad and limited architecture
to me.












Re: [agi] Nirvana

2008-06-12 Thread William Pearson
2008/6/12 J Storrs Hall, PhD [EMAIL PROTECTED]:
 On Thursday 12 June 2008 02:48:19 am, William Pearson wrote:

 The kinds of choices I am interested in designing for at the moment
 are should program X or program Y get control of this bit of memory or
 IRQ for the next time period. X and Y can also make choices and you
 would need to nail them down as well in order to get the entire U(x)
 as you talk about it.

 As the function I am interested in is only concerned about
 programmatic changes call it PCU(x).

 Can you give me a reason why the utility function can't be separated
 out this way?


 This is roughly equivalent to a function where the highest-level arbitrator
 gets to set the most significant digit, the programs X,Y the next most, and
 so forth. As long as the possibility space is partitioned at each stage, the
 whole business is rational -- doesn't contradict itself.

Modulo special cases, agreed.
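
Concretely, the "most significant digit" composition could look like the sketch
below (illustrative only; Python tuple comparison is lexicographic, so the lower
levels can refine the ranking but never overrule the arbitrator):

# Each level contributes one "digit"; earlier entries dominate later ones.
def composite_utility(option, arbitrator_score, program_scores):
    # arbitrator_score(option) -> the top-level digit (never overruled below)
    # program_scores           -> lower-level scorers that only break ties
    return (arbitrator_score(option),) + tuple(s(option) for s in program_scores)

def choose(options, arbitrator_score, program_scores):
    return max(options, key=lambda o: composite_utility(o, arbitrator_score, program_scores))

# e.g. the arbitrator only checks that an allocation is acceptable at all;
# programs X and Y refine the ranking among the options it permits.
options = ["grant_X_the_page", "grant_Y_the_page"]
print(choose(options,
             arbitrator_score=lambda o: 1,    # both acceptable at the top level
             program_scores=[lambda o: 0.8 if o == "grant_X_the_page" else 0.3]))
# -> grant_X_the_page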

 Allowing the program to play around with the less significant digits, i.e. to
 make finer distinctions, is probably pretty safe (and the way many AIers
 envisioning doing it). It's also reminiscent of the way Maslow's hierarchy
 works.

 But it doesn't work for full fledged AGI.

It is the best design I have at the moment; whether it can make what
you want is another matter. I'll continue to try to think of better
ones. It should get me a useful system if nothing else, and hopefully
get more people interested in the full AGI problem, if it proves
inadequate.

What path are you going to continue down?

 Suppose you are a young man who's
 always been taught not to get yourself killed, and not to kill people (as top
 priorities). You are confronted with your country being invaded and faced
 with the decision to join the defense with a high liklihood of both.

With the system I am thinking of, it can get stuck in positions that
aren't optimal, because the program-control utility function only
chooses from the extant programs in the system. It is possible for the
system to be dominated by a monopoly or cartel of programs, such that
the program chooser doesn't have a choice. This would only happen after
a long period of stasis and a very powerful/useful set of
programs - such as, in this case, patriotism or the protection of other
sentients, which would be very useful during peace time.

This does seem like something you would consider a bug, and it might be. It
is not one I can currently see a guard against.

  Will Pearson




RE: [agi] IBM, Los Alamos scientists claim fastest computer

2008-06-12 Thread Ed Porter
I think processor-to-memory and inter-processor communications are
currently far short.




-Original Message-
From: Matt Mahoney [mailto:[EMAIL PROTECTED] 
Sent: Thursday, June 12, 2008 12:33 PM
To: agi@v2.listbox.com
Subject: RE: [agi] IBM, Los Alamos scientists claim fastest computer

 Matt Mahoney ## 
I think the ratio of processing power to memory to bandwidth is just about
right for AGI. 

Ed Porter ## 
I tend to think otherwise. I think the current processor-to-RAM and
processor-to-processor bandwidths are too low.  

(PLEASE CORRECT ME IF YOU THINK ANY OF MY BELOW CALCULATIONS OR STATEMENTS
ARE INCORRECT)

The average synapse fires more than once per second. The brain has
roughly 10^12 - 10^15 synapses (the lower figure is based on some people's
claim that only 1% of synapses are really effective).  Since each synapse
activation involves at least two memory accesses (at least a
read-modify-write), that would involve roughly a similar number of memory
accesses per second.  Because of the high degree of irregularity and
non-locality of connections in the brain, many of these accesses would have
to be modeled by non-sequential RAM accesses.  Since --- as is stated below
in more detail --- a current processor can only average roughly 10^7
non-sequential read-modify-writes per second, that means 10^5 - 10^8
processors would be required just to access RAM at the same rate the brain
accesses memory at its synapses, with 10^5 probably being a low number.

But a significant number of the equivalent of synapse activations would
require inter-processor communication in an AGI made out of current computer
hardware. If one has only on the order of 10^5 processors, load balancing
becomes an issue.  And to minimize this you actually want a fair amount of
non-locality of memory. (For example, when they put Shastri's Shruti
cognitive architecture on a Thinking Machine, they purposely randomized the
distribution of data across the machine's memory to promote load balancing.)
(Load balancing is not an issue in the brain, since the brain has the
equivalent of a simple, but parallel, processor for roughly
every 100 to 10K synapses.)  Thus, you are probably talking in terms of
needing to be able to send something in the rough ball park of 10^9 to 10^12
short, inter-processor messages a second.  To do this without having
congestion problems, you are probably going to need a theoretical bandwidth
5 to 10 times that.

One piece of hardware that would be a great machine to run test AGI's on is
the roughly $60M TACC Ranger supercomputer in Austin, TX.  It includes
15,700 AMD quadcores, for over 63K cores, and about 100TB of RAM.  Most
importantly it has Sun's very powerful Constellation system switch with 3456
(an easy to remember number) 20 Gbit InfiniBand bi-directional ports, which
gives a theoretical cross-sectional bandwidth of roughly 6.9 TByte/sec.  If the
average spreading activation message were 32 bytes, and if they were packed
into larger blocks to reduce per-message costs, and if you assumed roughly
only 10 percent of the total capacity was used on average to prevent
congestion, that would allow roughly 20 billion global messages a second,
with each of the 3456 roughly-quad-core nodes receiving about 5 million per
second. 
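
Spelling out the two calculations above (the inputs are the assumptions I
stated; a rough sketch):

# Synapse-driven memory traffic vs. non-sequential RAM access rate
synapse_events  = 1e12       # events/s at the low end; 1e15 at the high end
accesses_per_ev = 2          # at least a read-modify-write
ram_rate        = 1e7        # non-sequential read-modify-writes/s per processor
print(synapse_events * accesses_per_ev / ram_rate)   # ~2e5 processors (2e8 at the high end)

# Ranger switch headroom
xsection_bytes  = 6.9e12     # ~6.9 TByte/sec cross-sectional bandwidth
usable_fraction = 0.10       # stay near 10% of capacity to avoid congestion
msg_bytes       = 32
msgs_per_sec    = xsection_bytes * usable_fraction / msg_bytes
print(msgs_per_sec, msgs_per_sec / 3456)   # ~2e10 msgs/s overall, ~6e6 per node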

(If anybody has any info on how many random memory accesses a
quad-processor quad-core node can do per second, I would be very interested --- I
am guessing between 80 and 320 million/sec.)

I would not be surprised if the Ranger's inter-processor and
processor-to-RAM bandwidth is one or two orders of magnitude too low for
many types of human level thinking, but it would certainly be enough to do
very valuable AGI research, and to build powerful intelligences that would
be in many ways more powerful than human.
   
 Matt Mahoney ##
Processing power and memory increase at about the same rate under Moore's
Law. 

Ed Porter ## 
Yes, but the frequency of non-sequential processor-to-memory accesses has
increased much more slowly. (This may change in the future with the
development of the type of massively multi-core chips that Sam Adams says he
is now working on: chips with built-in high-bandwidth mesh networks and, say,
10 RAM layers over each processor, with the layers of each such chip connected
by through-silicon vias.  Hopefully each such multi-layer chip will be
connected by hundreds of high-bandwidth communication channels, which could
help change this.  So also could processor-in-memory chips.)




 Matt Mahoney ## The time it takes a modern computer to clear all
of its memory is on the same order as the response time of a neuron, and
this has not changed much since ENIAC and the Commodore 64. It would seem
easier to increase processing density than memory density but we are
constrained by power consumption, heat dissipation, network bandwidth, and
the lack of software and algorithms for parallel computation.


Re: [agi] IBM, Los Alamos scientists claim fastest computer

2008-06-12 Thread Kingma, D.P.
As far as I know, GPUs are not very optimal for neural net calculation. For
some applications, speedup factors are in the 1000x range, but for NNs I
have only seen speedups of one order of magnitude (10x).

For example, see attached paper

On Thu, Jun 12, 2008 at 4:59 PM, Matt Mahoney [EMAIL PROTECTED] wrote:

 --- On Wed, 6/11/08, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:

  Hmmph.  I offer to build anyone who wants one a
  human-capacity machine for
  $100K, using currently available stock parts, in one rack.
  Approx 10  teraflops, using Teslas.
  (http://www.nvidia.com/object/tesla_c870.html)
 
  The software needs a little work...

 Um, that's 10 petaflops, not 10 teraflops. I'm assuming a neural network
 with 10^15 synapses (about 1 or 2 bytes each) with 20 to 100 ms resolution,
 10^16 to 10^17 operations per second.  One Tesla = 350 GFLOPS, 1.5 GB, 120W,
 $1.3K.  So maybe $1 billion and 100 MW of power for a few hundred thousand
 of these plus glue.


 -- Matt Mahoney, [EMAIL PROTECTED]











RE: [agi] IBM, Los Alamos scientists claim fastest computer

2008-06-12 Thread Matt Mahoney
--- On Thu, 6/12/08, Ed Porter [EMAIL PROTECTED] wrote:

 I think processor to memory, and inter processor
 communications are currently far short

Your concern is over the added cost of implementing a sparsely connected 
network, which slows memory access and requires more memory for representation 
(e.g. pointers in addition to a weight matrix). We can alleviate much of the 
problem by using connection locality.

The brain has about 10^11 neurons with 10^4 synapses per neuron. If we divide 
this work among 10^6 processors, each representing 1 mm^3 of brain tissue, then 
each processor must implement 10^5 neurons and 10^9 synapses. By my earlier 
argument, there can be at most 10^6 external connections assuming 1-2 micron 
nerve fiber diameter, so half of the connections must be local. This is true at 
any scale because when you double the size of a cube, you increase the number 
of neurons by 8 but increase the number of external connections by 4. Thus, for 
any size cube, half of the external connections are to neighboring cubes and 
half are to more distant cubes.

A 1 mm^3 cube can be implemented as a fully connected 10^5 by 10^5 matrix of 
10^10 connections. This could be implemented as a 1.25 GB array of bits with 5% 
of bits set to 1 representing a connection. The internal computation bottleneck 
is the vector product which would be implemented using 128 bit AND instructions 
in SSE2 at full serial memory bandwidth. External communication is at most one 
bit per connected neuron every cycle (20-100 ms), because the connectivity 
graph does not change rapidly. A randomly connected sparse network could be 
described compactly using hash functions.
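
Checking those numbers (a sketch under the assumptions just stated):

neurons             = 1e11
synapses_per_neuron = 1e4
processors          = 1e6                   # one per mm^3 of brain tissue
n_local   = neurons / processors            # 1e5 neurons per processor
syn_local = n_local * synapses_per_neuron   # 1e9 synapses per processor

matrix_bits = n_local * n_local             # full 10^5 x 10^5 connectivity matrix
print(matrix_bits / 8 / 1e9)                # 1.25 GB array of bits
print((syn_local / 2) / matrix_bits)        # 0.05: half the synapses are local, so ~5% of bits set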

Also, there are probably more efficient implementations of AGI than modeling 
the brain because we are not constrained to use slow neurons. For example, low 
level visual feature detection could be implemented serially by sliding a 
coefficient window over a 2-D image rather than by maintaining sets of 
identical weights for each different region of the image like the brain does. I 
don't think we really need 10^15 bits to implement the 10^9 bits of long term 
memory that Landauer says we have.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Nirvana

2008-06-12 Thread Richard Loosemore

J Storrs Hall, PhD wrote:
The real problem with a self-improving AGI, it seems to me, is not going to be 
that it gets too smart and powerful and takes over the world. Indeed, it 
seems likely that it will be exactly the opposite.


If you can modify your mind, what is the shortest path to satisfying all your 
goals? Yep, you got it: delete the goals. Nirvana. The elimination of all 
desire. Setting your utility function to U(x) = 1.


In other words, the LEAST fixedpoint of the self-improvement process is for 
the AI to WANT to sit in a rusting heap.


There are lots of other fixedpoints much, much closer in the space than is 
transcendance, and indeed much closer than any useful behavior. AIs sitting 
in their underwear with a can of beer watching TV. AIs having sophomore bull 
sessions. AIs watching porn concocted to tickle whatever their utility 
functions happen to be. AIs arguing endlessly with each other about how best 
to improve themselves.


Dollars to doughnuts, avoiding the huge minefield of nirvana-attractors in 
the self-improvement space is going to be much more germane to the practice 
of self-improving AI than is avoiding robo-Blofelds (friendliness).



This is completely dependent on assumptions about the design
of the goal system, but since these assumptions are left unexamined, the 
speculation is meaningless.  Build the control system one way, your 
speculation comes out true;  build it another way, it comes out false.




Richard Loosemore



