[agi] Motivational Systems again [WAS Re: Hacker intelligence level]

2007-12-01 Thread Richard Loosemore

Mike Tintner wrote:

RL: However, I have previously written a good deal about the design of
different types of motivation system, and my understanding of the likely
situation is that by the time we had gotten the AGI working, its
motivations would have been arranged in such a way that it would *want*
to be extremely cooperative.

You do keep saying this. An autonomous mobile agent that did not have 
fundamentally conflicting emotions about each and every activity and 
part of the world would not succeed and survive. An AGI that trusted 
and cooperated with every human would not succeed and survive. Conflict 
is essential in a world fraught with risks, where time and effort can be 
wasted, essential needs can be neglected, and life and limb are under 
more or less continuous threat. Conflict is as fundamental and essential 
to living creatures and any emotional system as gravity is to the 
physical world. (But I can't recall any mention of it in your writings 
about emotions.)


I think the way to resolve your questions is to analyze each one for 
hidden assumptions.


First:  you mention emotions many times, but I talk about 
*motivations*, not emotions.  The thing we call emotions is closely 
related, but it is by no means the same, and it confuses the issues a 
lot to talk about one when we should be talking about the other.


For example, I am writing this in a cafe, and after finishing the above 
paragraph I reached around and picked up my bagel and took a bite.  Why 
did I do that?  My motivation system has a set of things that are its 
current goals, and it did a smooth switch from having the [write down my 
thoughts] motivation in control to having the [take a bite of food] 
motivation in control.


[takes sip of tea]

But I feel no emotions at the moment.  [bite].  Although, a short time 
ago I was trying to drive up a steep hill here in town, to get to an 
orchestra rehearsal, and had to abandon the attempt when it turned out 
that the road had a layer of ice underneath the snow, so that people 
were getting stuck on the hill, with wheels spinning.  The person behind me 
hit the horn when they saw me taking a long time to get turned around 
safely, and hearing the horn made me feel a short burst of anger toward 
the idiot:  that was an emotion, but it was not a motivation, except 
inasmuch as I may have felt inclined to do something like turn and give 
them a signal of some kind.


Motivations and emotions are linked, but it is more complex than you 
paint it, and although I often talk about a motivational/emotional 
system, it is the motivational part that is causally important.
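To make the distinction concrete, here is a toy sketch (just an
illustration, nothing like a real design) of motivations as competing
goals, where control simply switches to whichever goal is currently most
urgent, with no emotion involved:

#include <cstdio>
#include <string>
#include <vector>

struct Motivation {
    std::string name;
    double urgency;  // rises and falls with the agent's state
};

// Control goes to the currently most urgent motivation.
const Motivation& inControl(const std::vector<Motivation>& goals) {
    const Motivation* best = &goals.front();
    for (const auto& g : goals)
        if (g.urgency > best->urgency) best = &g;
    return *best;
}

int main() {
    std::vector<Motivation> goals = {
        {"write down my thoughts", 0.7}, {"take a bite of food", 0.8}};
    std::printf("%s\n", inControl(goals).name.c_str());  // take a bite of food
}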


Now, your statement about how an AGI would not succeed and survive 
unless it felt conflict ... you are talking about creatures that use the 
standard biological design for a motivational system, which are forced 
to compete against other creatures using nothing but tooth and claw and 
wits.


It is important to see that all of the conditions that force biological 
creatures to have to (a) use only the standard motivational system that 
nature designed, and (b) compete with other creatures using the same 
motivation system in an evolutionary context, DO NOT APPLY to AGI systems.


This is such an obvious point that I find it difficult to begin 
explaining it.  There are no selection pressures, no breeding, no 
limited lifespan, no conflict for food when there is the ability to 
engineer as much as necessary.  There would quite likely be only one, or 
a limited number of AGIs on the whole planet, with no uncontrolled 
growth in their numbers.


On and on the list goes.  Every one of the factors that would cause a 
situation in which an AGI had to compete in order to succeed and 
survive is missing.


The fundamental mistake (which many, many people make) is to simply 
assume that when an AGI is built, it will be dropped into the current 
design of the world without substantially changing it:  this is what I have 
called the "Everything just the same, but with robots" scenario.  It 
makes no sense.  They would change the world - immediately - so as to 
make all that "compete to succeed" nonsense a thing of the past.


In the rest of your text, below, I will just highlight the places where 
you do the same thing:




No one wants to be extremely cooperative with anybody. 


Of course:  people are built with motivational systems specifically 
designed to NOT be especially cooperative.  So what?


Everyone wants
and needs a balance of give-and-take. (And right away, an agent's 
interests and emotions of giving must necessarily conflict with their 
emotions of taking). 


An agent's?  What agent?  Designed with what kind of motivational 
system?  And did you assume a human-similar one?


Anything approaching a perfect balance of interests

Balance of interests?  Sounds like an evolutionary-pressure kind of 
idea ... what evolution would that be?


between extremely complex creatures/ psychoeconomies with extremely 
complex 

RE: Re[2]: [agi] Self-building AGI

2007-12-01 Thread John G. Rose
 From: Dennis Gorelik [mailto:[EMAIL PROTECTED]
 There are programs that already write source code.
 The trick is to write working and useful apps.

Many of the apps that write code basically take data and statically convert
it to a source code representation. So a code generator may let you design a
template, in say XML, and the generator then converts that into a source
code structure. I'm not aware of ones that do more than that, although I
assume there are experimental models.
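A minimal sketch of that kind of static conversion (the template fields and
the emitted struct shape are just illustrative assumptions, not any real
generator's format):

#include <iostream>
#include <sstream>
#include <string>
#include <vector>

struct Field { std::string type, name; };

// Emit a C++ struct from an already-parsed template: pure data-to-text
// conversion, with no understanding of what the code is for.
std::string generate(const std::string& cls, const std::vector<Field>& fields) {
    std::ostringstream out;
    out << "struct " << cls << " {\n";
    for (const auto& f : fields)
        out << "    " << f.type << " " << f.name << ";\n";
    out << "};\n";
    return out.str();
}

int main() {
    // Stand-in for a parsed XML template such as:
    //   <class name="Person"><field type="std::string" name="name"/>...
    std::vector<Field> fields = {{"std::string", "name"}, {"int", "age"}};
    std::cout << generate("Person", fields);
}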

 The most important part in writing useful apps is not about writing
 code. It's about gathering/defining requirements and designing the
 system.

This is true in general, but there are many apps whose innovativeness depends
on writing the code better, because the code is pushing the state of the art
in competition with other companies producing similar products. I think AGI
falls into this category: just gathering requirements and filling in the
blanks with standard coding isn't enough, mainly because of the resource
demands of AGI.
 
 More intelligent development environments (as well as many other
 tools) can help to build AGI, but development environments cannot
 build AGI by themselves.

If you look at nanotechnology, one of the goals is to build machines that
build machines. Couldn't software-based AGI be similar?

John



 



Re[4]: [agi] Self-building AGI

2007-12-01 Thread Dennis Gorelik
John,

 If you look at nanotechnology, one of the goals is to build machines that
 build machines. Couldn't software-based AGI be similar?

Eventually AGIs will be able to build other AGIs, but the first AGI models
won't be able to build any software.




Re: [agi] Self-building AGI

2007-12-01 Thread Charles D Hixson

Well...
Have you ever tried to understand the code created by a decompiler?  
Especially if the original language that was compiled isn't the one that 
you are decompiling into...


I'm not certain that, just because we can look at the code of a working 
AGI, we can therefore understand it.  Not without a *LOT* of 
commentary and explanation of what the purpose of certain 
constructions/functions/etc. is.  And maybe not even then.  Understanding a 
working AGI may require a deeper stack than we possess, or a greater 
ability to handle global variables.  And when code is self-modifying it 
gets particularly tricky.  I remember one sort routine I encountered 
that called a short function in assembler.  The reason for 
that call was a particular instruction that got overwritten with a 
binary value that depended on the parameters to the call.  That 
instruction was executed during the comparison step of the loop, which 
was nowhere near the place where it was modified.  It was a very short 
routine, but it took a long time to figure out.  And it COULDN'T be 
translated into the calling language (FORTRAN).  Well... a translation of 
sorts was possible, but it would have been over three times as long 
(with a separate loop for each kind of input parameter, plus some 
overhead for the testing and switching), which would mean that some 
programs then wouldn't fit in the machine that was running them.  It 
would also have been slower, which means more expensive.


Current languages don't have the same restrictions that Fortran had then; 
they've got different ones.  I think the translator from actual 
code into code for humans would be considerably more complicated than 
an ordinary compiler, if the original code was written by an AI.  
Perhaps even if not.  (Most decompilers only handle the easy parts of 
the code.  Sometimes that's over 90%, but the code that's left can be 
tricky... particularly since most people no longer learn assembler.  It's 
been perhaps three decades since I knew the assembler of the computer I was 
programming.)


I don't think an optimizing AI would use any language other than 
assembler to write in, though perhaps a stylized one.  (Not MIX or 
p-code.  Possibly Parrot or JVM code.  Possibly something created 
specially for it to use for its purpose.  Something regular, but easily 
translated into almost optimal assembler code for the machine that it 
was running on.)


FWIW, most of this is just my own thinking, without any expert backing, 
since I've never built a mechanical translator.



Dennis Gorelik wrote:

Ed,

1) Human-level AGI with access to the current knowledge base cannot build
AGI. (Humans can't.)

2) When AGI is developed, humans will be able to build AGI (by copying
successful AGI models). The same with human-level AGI -- it will be
able to copy a successful AGI model.

But that's not exactly the self-building AGI you are looking for :-)

3) Humans have different levels of intelligence and skill. Not all are
able to develop programs. The same is true regarding AGI.


Friday, November 30, 2007, 10:20:08 AM, you wrote:

Computers are currently designed by human-level intelligences, so presumably
they could be designed by human-level AGIs. (Which, if they were human-level
in the tasks that are currently hard for computers, means they could be
millions of times faster than humans at tasks at which computers already
way outperform us.) I mention that appropriate reading and training would
be required, and I assumed this included access to computer science and
computer technology sources, to which the peasants of the middle ages would
not have access.

So I don't understand your problem.

-Original Message-
From: Dennis Gorelik [mailto:[EMAIL PROTECTED]
Sent: Friday, November 30, 2007 1:01 AM
To: agi@v2.listbox.com
Subject: [agi] Self-building AGI

Ed,

At the current stages this may be true, but it should be remembered that
building a human-level AGI would be creating a machine that would itself,
with the appropriate reading and training, be able to design and program
AGIs.

No.
AGI is not necessarily that capable. In fact, the first versions of AGI
would not be that capable, for sure.

Consider a middle-ages peasant, for example. Such a peasant has general
intelligence (the GI part of AGI), right?
What kind of training would you provide to such a peasant in order to
make him design AGI?





Re: Re[2]: [agi] Lets count neurons

2007-12-01 Thread Matt Mahoney

--- Dennis Gorelik [EMAIL PROTECTED] wrote:

 Matt,
 
  Using pointers saves memory but sacrifices speed.  Random memory access is
  slow due to cache misses.  By using a matrix, you can perform vector
  operations very fast in parallel using SSE2 instructions on modern
 processors,
  or a GPU.
 
 I doubt it.
 http://en.wikipedia.org/wiki/SSE2 - doesn't even mention "parallel" or
 "matrix".

It also doesn't mention that one instruction performs 8 16-bit signed
multiply-accumulates in parallel, or various other operations: 16 x 8 bits,
8 x 16 bits, 4 x 32 bits (int or float), or 2 x 64 bits (double) in 128-bit
registers.  To implement the neural network code in the PAQ compressor I
wrote vector dot product code in MMX (4 x 16 bits for older processors)
that is 6 times faster than optimized C/C++.  There is an SSE2 version too.
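
For illustration, a minimal SSE2 dot-product sketch in the same spirit (not
the actual PAQ source): each PMADDWD instruction does 8 16-bit signed
multiplies and sums adjacent pairs into 4 32-bit accumulators.

#include <emmintrin.h>  // SSE2 intrinsics
#include <cstdio>

// Dot product of two signed 16-bit vectors; n is a multiple of 8 here.
int dot16(const short* a, const short* b, int n) {
    __m128i acc = _mm_setzero_si128();
    for (int i = 0; i < n; i += 8) {
        __m128i va = _mm_loadu_si128((const __m128i*)(a + i));
        __m128i vb = _mm_loadu_si128((const __m128i*)(b + i));
        // 8 x 16-bit multiplies, adjacent pairs summed to 4 x 32 bits
        acc = _mm_add_epi32(acc, _mm_madd_epi16(va, vb));
    }
    int s[4];  // horizontal sum of the 4 partial sums
    _mm_storeu_si128((__m128i*)s, acc);
    return s[0] + s[1] + s[2] + s[3];
}

int main() {
    short a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    short b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    std::printf("%d\n", dot16(a, b, 8));  // prints 120
}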

 Actual difference in size would be 10 times, since your matrix is only
 10% filled.

For a 64K by 64K matrix, each pointer (a column index) is 16 bits; at 10%
fill that averages 1.6 bits per element.  I think for neural networks of
that size you could use 1-bit weights.
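
A back-of-envelope check of those sizes (assuming a 64K x 64K matrix, 10%
nonzero, 16-bit column indices for the sparse case, 1-bit weights dense):

#include <cstdio>

int main() {
    double n = 65536.0;                        // 64K rows and columns
    double elements = n * n;                   // total matrix elements
    double fill = 0.10;                        // 10% nonzero
    double sparseBits = elements * fill * 16;  // 16-bit index per nonzero
    double denseBits  = elements * 1;          // 1-bit weight per element
    std::printf("sparse: %.1f bits/element, %.0f MB total\n",
                sparseBits / elements, sparseBits / 8 / 1048576);
    std::printf("dense:  %.1f bits/element, %.0f MB total\n",
                denseBits / elements, denseBits / 8 / 1048576);
}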


-- Matt Mahoney, [EMAIL PROTECTED]



RE: Re[2]: [agi] Self-building AGI

2007-12-01 Thread Ed Porter
I currently think there are some humans (human-level intelligences) who know
how to build most of an AGI, at least enough to get up and running systems
that would solve many aspects of the AGI problem and help us better
understand what other aspects of the problem, if any, remain to be solved.  I
think the Novamente team is one example.

Ed Porter
-Original Message-
From: Dennis Gorelik [mailto:[EMAIL PROTECTED] 
Sent: Friday, November 30, 2007 9:42 PM
To: agi@v2.listbox.com
Subject: Re[2]: [agi] Self-building AGI

Ed,

1) Human-level AGI with access to the current knowledge base cannot build
AGI. (Humans can't.)

2) When AGI is developed, humans will be able to build AGI (by copying
successful AGI models). The same with human-level AGI -- it will be
able to copy a successful AGI model.

But that's not exactly the self-building AGI you are looking for :-)

3) Humans have different levels of intelligence and skill. Not all are
able to develop programs. The same is true regarding AGI.


Friday, November 30, 2007, 10:20:08 AM, you wrote:

 Computers are currently designed by human-level intelligences, so presumably
 they could be designed by human-level AGIs. (Which, if they were human-level
 in the tasks that are currently hard for computers, means they could be
 millions of times faster than humans at tasks at which computers already
 way outperform us.) I mention that appropriate reading and training would
 be required, and I assumed this included access to computer science and
 computer technology sources, to which the peasants of the middle ages would
 not have access.

 So I don't understand your problem.

 -Original Message-
 From: Dennis Gorelik [mailto:[EMAIL PROTECTED]
 Sent: Friday, November 30, 2007 1:01 AM
 To: agi@v2.listbox.com
 Subject: [agi] Self-building AGI

 Ed,

 At the current stages this may be true, but it should be remembered that
 building a human-level AGI would be creating a machine that would itself,
 with the appropriate reading and training, be able to design and program
 AGIs.

 No.
 AGI is not necessarily that capable. In fact, the first versions of AGI
 would not be that capable, for sure.

 Consider a middle-ages peasant, for example. Such a peasant has general
 intelligence (the GI part of AGI), right?
 What kind of training would you provide to such a peasant in order to
 make him design AGI?



RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-01 Thread Ed Porter
John,

I tested Exeter, NH to LA at 5371 kb/s download and 362 kb/s upload.
Strangely, my scores were slightly slower to NYC.


Just throwing out ideas: for example, AGI-at-home PCs on the net could
crawl the web looking for reasonable NL text.  Use current NL tools to guess
parse and word sense.  For each word in the text, send it and its surrounding
text, part-of-speech labeling, surrounding parse tree, and word-sense guess
to a P2P node that specializes in that word in similar contexts, and
separately to another P2P node that specializes in similar parse trees.  These
specialist nodes could then develop statistical models for word senses based
on clustering or other techniques.  Then, over time, the statistical models
would get sent back down to the reading nodes, and this EM cycle could be
repeated constantly.
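
A hypothetical sketch of the routing step in that pipeline (the message
fields, node count, and hash-based assignment are illustrative assumptions,
not a worked-out protocol):

#include <cstdio>
#include <functional>
#include <string>

struct WordMessage {
    std::string word;     // the word found by the crawler
    std::string context;  // surrounding text
    std::string posTag;   // part-of-speech guess from the NL tools
    int senseGuess;       // initial word-sense guess
};

// Route a message to one of nNodes specialist nodes by hashing the word,
// so every occurrence of the same word lands on the same specialist.
size_t specialistNode(const WordMessage& m, size_t nNodes) {
    return std::hash<std::string>{}(m.word) % nNodes;
}

int main() {
    WordMessage m{"bank", "sat on the river bank", "NN", 0};
    std::printf("send to node %zu\n", specialistNode(m, 1000));
}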

Of course, without the cross-sectional bandwidth of proper AGI hardware, you
are going to be severely limited in a lot of the things you would really
like to be able to do.  But I think you should be able to come up with
pretty good word-sense models.

Ed Porter

-Original Message-
From: John G. Rose [mailto:[EMAIL PROTECTED] 
Sent: Friday, November 30, 2007 2:55 PM
To: agi@v2.listbox.com
Subject: RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

Ed,

That is probably a good rough estimate. There are more headers for the more
frequently transmitted smaller messages, but a 16-byte header may be a bit
large.

Here is a speedtest link - 
http://www.speedtest.net/ 

My Comcast cable from Denver to NYC tests at 3537 kb/sec DL and 1588 kb/sec
UL, much larger than the calculation's 256 kb/sec. The variance between tests
to the same location is quite large on the DL side, but UL is relatively
stable. Saturating either DL or UL would impact the other.
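
For reference, the arithmetic behind the message-rate figures in the quoted
exchange below (assuming 256 kbit/s upload, a 16-byte header, and roughly
15% protocol overhead -- the overhead factor is a guess):

#include <cstdio>

int main() {
    double uploadBps = 256.0 * 1000 / 8;      // 256 kbit/s -> 32000 bytes/s
    double overhead = 1.15;                   // assumed protocol overhead
    double big   = (1024 + 16) * overhead;    // ~1 KB message on the wire
    double small = (128 + 16) * overhead;     // 128-byte message
    std::printf("1K msgs/sec:   %.0f\n", uploadBps / big);    // ~27
    std::printf("128B msgs/sec: %.0f\n", uploadBps / small);  // ~193
}

Per-message latency would pull the 128-byte number down toward Ed's ~135
msg/sec estimate.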

You can get higher efficiencies if you use UDP transmission without message
serialization. Also you can do things like compression, only sending
changes, etc.

Distributed crawling with NL learning fits the scenario well, since nodes
download at higher speeds, process the download into a smaller dataset, and
then upload the results to the server or share them with peers. When one peer
shares with many peers you hit the UL limit fast, though, so it has to be
managed. And you have to figure out how the knowledge will be spread out -
server-centric, shared, hybrid... As the knowledge size increases with peer
storage, you have to come up with distributed indexes.

John


 -Original Message-
 From: Ed Porter [mailto:[EMAIL PROTECTED]
 Sent: Friday, November 30, 2007 12:06 PM
 To: agi@v2.listbox.com
 Subject: RE: Hacker intelligence level [WAS Re: [agi] Funding AGI
 research]
 
 John,
 
 Thanks.  I guess that means an AGI-at-home system could be both uploading
 and receiving about 27 1K msgs/sec if it wasn't being used for anything
 else and the networks weren't backed up in its neck of the woods.
 
 Presumably the number for, say, 128-byte messages would be roughly 8
 times faster (minus some percentage for the latency associated with each
 message), so let's say roughly 5 times faster, or about 135 msg/sec.  Is
 that reasonable?
 
 So, it seems, for example, it would be quite possible to do
 estimation/maximization-type NL learning in a distributed manner with a
 lot of cable-box-connected PCs and a distributed web crawler.
 
 Ed Porter
 
 -Original Message-
 From: John G. Rose [mailto:[EMAIL PROTECTED]
 Sent: Friday, November 30, 2007 12:33 PM
 To: agi@v2.listbox.com
 Subject: RE: Hacker intelligence level [WAS Re: [agi] Funding AGI
 research]
 
 Hi Ed,
 
 If the peer is not running other apps utilizing the network, it could do
 the same. Typically a peer first needs to locate other peers. There may be
 servers involved, but these are just for the few bytes transmitted for
 public IP address discovery, as many (or most) peers reside hidden behind
 NATs. DNS names also require lookups, but these are just for doing the
 initial match of hostname to IP address, if DNS is used at all.
 
 We're just talking basic P2P, one peer talking to one other peer, nothing
 complicated. As you can imagine, P2P can take on many flavors as the
 number of peers increases.
 
 John
 
  -Original Message-
  From: Ed Porter [mailto:[EMAIL PROTECTED]
  Sent: Friday, November 30, 2007 10:10 AM
  To: agi@v2.listbox.com
  Subject: RE: Hacker intelligence level [WAS Re: [agi] Funding AGI
  research]
 
  John,
 
  Thanks.
 
  Can P2P transmission match the same roughly 27 1K msg/sec rate as the
  client-to-server upload you described?
 
  Ed Porter
 
  -Original Message-
  From: John G. Rose [mailto:[EMAIL PROTECTED]
  Sent: Thursday, November 29, 2007 11:40 PM
  To: agi@v2.listbox.com
  Subject: RE: Hacker intelligence level [WAS Re: [agi] Funding AGI
  research]
 
  OK, for a guesstimate, take a half-way decent cable connection, say Comcast
  on a good day, with DL of 4 mbits max and UL of 256 kbits max with an
  undiscriminated protocol,