RE: [agi] My proposal for an AGI agenda

2007-03-23 Thread John Rose
> PowerBuilder and Visual FoxPro.  Everyone is entitled to their 
> opinion, but if Math wasn't required at all in my career, I fail 
> to see how it is necessary for the creation of an AGI or 

Building an AGI without math, I wonder what the design would look like.

> different creature.  How much Math do you think is 
> used by the brains of most human beings?  

Lots.  What math is not in there?  Where did all our math come from?  Consider
the one string of zeros and ones that your whole computer language boils down
to and can be described with: how many bits is it, how many bits can one brain
be described with, and what would the mapping be between your language's
string and a brain's string?  If you could simplify that mapping by
incorporating some of it into your language's string, there could be some
pretty useful features in there.

> Would you define total introspection and many built-in tools to create
> efficient programs using programs to be "same old things rehashed"?  

Yes.

For a language to succeed nowadays (there are new ones all the time), you need
something to get people using it.  Imagine a really, really advanced language
created by super-intelligent, giant-brained aliens (seriously), or created by
their alien supercomputer.  What would that language be like?  Would it be a
mishmash of the lowest common denominators of current "earth" computer
languages, permuted into something different and optimized a little more?
What would it really have?  It would have features that are breathtaking.
Would it have for-loops where the syntax is changed a little?  Or OOP enhanced
just a bit?  No.  You would see stuff that would make your eyes twitch.  This
may sound like a crazy way of looking at it, and perhaps for some not really
useful, but what reference points do we have for new languages that would be
useful?  I'm sure your language is more than just a rehash, and I'm not trying
to put it down; I'm just trying to generate some ideas, because realistically
you could add one unique feature that could potentially propel it into
stardom.

John

- Original Message - 
From: "John Rose" <[EMAIL PROTECTED]>
To: 
Sent: Thursday, March 22, 2007 10:26 PM
Subject: RE: [agi] My proposal for an AGI agenda


> Enhancements to existing computer languages, or new computer languages, that
> could possibly grease the wheels for AGI development would align the
> language more closely with mathematics.  Many computer languages are
> the same old things rehashed in different, though new and evolutionarily
> better, ways, but nothing I've seen is too revolutionary.  Same old looping
> structures, classes, OOP, etc.; nothing new.  But someone could add some
> handy math structures into the language (group/ring theory, category
> theory, advanced graph structures), with this stuff built right in, not
> utilized through add-ons or libraries or coded in house.  These math
> structures should be standardized and made part of the language now.
> Believe me, non-mathematician programmers would learn these tools rapidly
> and use them in new and exciting ways.  Right now many are going in all
> sorts of wild-goose-chase directions for lack of standard mathematical
> guidance built in.  If this came about, we would grow a crop of potential
> AGI developers who don't need math PhDs.  Granted, good C++ coders can
> accomplish just about "anything" with the language, but there is a
> complexity-overhead tradeoff that must be maintained in doing so (usually
> in exchange for speed, in C++).  But as many here understand, with good
> patterns, algorithms, and particularly math systems and structures, speed
> issues can be bridged and many times eliminated by designing and solving
> things through math versus CPU cycles.  Naturally, AGI systems can and do
> have handcrafted or other incorporated languages built in, but these too
> often suffer from the same limitations.  Though I imagine certain AGIs
> have some pretty advanced languages cooked up inside, and perhaps these
> are the ones that grapple more efficiently with machine and network
> resource limitations.
>
> John
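For what it's worth, the "group/ring theory ... built right in" that John asks for could look something like the sketch below. The `Group` structure and its API are my own invention for illustration, not a feature of any real language:

```python
from dataclasses import dataclass
from typing import Callable, Generic, TypeVar

T = TypeVar("T")

@dataclass(frozen=True)
class Group(Generic[T]):
    """A hypothetical built-in algebraic structure: a carrier set with an
    associative operation, an identity element, and inverses."""
    op: Callable[[T, T], T]
    identity: T
    inverse: Callable[[T], T]

    def power(self, x: T, n: int) -> T:
        """Apply op to x repeatedly; negative n uses the inverse."""
        if n < 0:
            return self.power(self.inverse(x), -n)
        acc = self.identity
        for _ in range(n):
            acc = self.op(acc, x)
        return acc

# The integers under addition form a group.
Z_add = Group(op=lambda a, b: a + b, identity=0, inverse=lambda a: -a)
print(Z_add.power(3, 4))   # 12
print(Z_add.power(3, -2))  # -6
```

Generic algorithms (fast exponentiation, orbit computations, and so on) could then be written once against the structure rather than per data type, which is roughly the standardization being argued for.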


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



Re: [agi] My proposal for an AGI agenda

2007-03-23 Thread Samantha Atkins

David Clark wrote:

I appreciate the amount of effort you made in replying to my email.
 
Most of your questions would be answered if you read the documentation 
on my site.  The last time I looked, LISP had no built-in database.  
Has this changed recently?  Does it still use that idiotic prefix 
notation?  My internal byte code uses just such a system, but it is 
meant for a computer to read, not a human.


I read your site.  Lisp does not need a built-in database if it can use 
existing databases.  There is AllegroCache.  The syntax is not at all 
"idiotic".  It is simple, yet it packs incredible power.  It is what 
enables "code is data and data is code".  It is a large part of why Lisp 
is the premier language for creating DSLs, and it is the language in which 
the majority of new language features were first invented and tested.   
But that you simply call it "idiotic" probably means it is not 
worthwhile to attempt to convince you.
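The "code is data" point about prefix notation can be illustrated with a toy s-expression evaluator, using Python lists to stand in for Lisp lists (a sketch of the idea, not real Lisp):

```python
import operator
from functools import reduce

# The available "primitives" of our toy prefix language.
OPS = {
    "+": lambda args: sum(args),
    "*": lambda args: reduce(operator.mul, args, 1),
}

def evaluate(expr):
    """Evaluate a prefix-notation expression given as nested lists."""
    if not isinstance(expr, list):
        return expr                          # a literal number
    head, *args = expr
    return OPS[head]([evaluate(a) for a in args])

# (+ 1 (* 2 3)) in prefix notation:
program = ["+", 1, ["*", 2, 3]]
print(evaluate(program))   # 7

# Because the program is an ordinary data structure, another program
# can inspect and rewrite it before running it:
program[2][0] = "+"                          # (* 2 3) becomes (+ 2 3)
print(evaluate(program))   # 6
```

The uniform prefix syntax is exactly what makes the rewrite step a one-line list mutation rather than a parsing exercise.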


I built my first PC in 1976.  In 1985 I created a persistent, distributed 
OO development platform.  Do you really want this kind of contest, or do 
your points stand or fall on their own?


- samantha




Re: [agi] My proposal for an AGI agenda

2007-03-23 Thread Ben Goertzel

David Clark wrote:

I appreciate the amount of effort you made in replying to my email.
 
Most of your questions would be answered if you read the documentation 
on my site.  The last time I looked, LISP had no built-in database.


Allegro Lisp has a very nice (easy to use, scalable, damn fast) built-in 
database, FYI.



ben



Re: [agi] My proposal for an AGI agenda

2007-03-23 Thread David Clark
I appreciate the amount of effort you made in replying to my email.

Most of your questions would be answered if you read the documentation on my 
site.  The last time I looked, LISP had no built-in database.  Has this changed 
recently?  Does it still use that idiotic prefix notation?  My internal byte 
code uses just such a system, but it is meant for a computer to read, not a human.

I set out a list of attributes that I believed a language should have to create 
an AGI.  I found no SINGLE language that contained even a subset of those 
attributes.  I have spent a huge amount of time bringing my system to its 
current level, but I would stop my development today if a single computer 
language could meet my original spec list.

I bought my first PC in 1976.  I owned many different microcomputer 
systems even before the PC was invented.  I helped design and program a PC from 
the ground up in 1979, including a multi-user CP/M operating system that ran in 
64K of memory.  I worked on a Mac to convert a language/database program in 
1984.  I have used mainframes, minis, and many non-PC systems.  I have 
programmed in at least 30 computer languages and have created more than a few 
myself.  I don't think I know everything about micros or languages, but I am not 
a newbie or without PC experience.  Unlike most people who have had a long 
career, I have spent ALL my career (30+ years) using microcomputers.

Linux stinks!  I have had my Linux box at my right knee for the past 6 
years, and I can't wait to get rid of it.  I dislike the Linux culture, the 
cryptic operating system commands, the massive number of versions, and the buggy 
(free) code.  Six years ago, I bought over a foot-high stack of documentation and 
taught myself Linux, VIM, Apache, PHP3, MySQL, HTML, and sendmail.  I had a 
commercial website up and running in 30 days that logged over 150k in sales in 
its first 3 months without advertising.  It is now automatically updated and 
synchronized from my client's computer 600 miles away multiple times a day.  You 
might not agree with my conclusions, but I wanted you to know that my 
conclusions are based on firsthand experience and not on books or third 
parties!

Most of the PCs in the world run Windows.  I personally don't much like 
Microsoft, but with 95% of users using it, I think I'll stick with the crowd.  I 
dislike Mac and Linux much more.  That's just my professional opinion.

I have programmed my language in C++, but only the small amount of code that 
implements the IDE is actually not in portable C.  Very few of the 
non-standard or Windows-specific functions were used in my code.  I do have 
plans to implement the language on Linux in the future, but many other things 
are a higher priority.

"If your language is dependent on one OS, then it sucks from the beginning."

I guess this is your form of *constructive criticism*.  Thanks!

"You say you looked at different languages such as Lisp, but your comments say 
you didn't really understand what you were looking at, at all."

What exact examples do you disagree with?  Do you think the examples I took 
verbatim from the official Lisp web site were made up?  Do YOU understand or do 
you just know the code words?

-- David Clark




  - Original Message - 
  From: Samantha Atkins 
  To: agi@v2.listbox.com 
  Sent: Friday, March 23, 2007 3:37 PM
  Subject: Re: [agi] My proposal for an AGI agenda



  I think you are pretty much stark raving here. :-)   Linux is not cryptic 
compared to Windoze in the least.  I have worked with Windows many, many times.  
Whenever I have a Windows machine that I bought myself, it inevitably gets converted 
to Linux within 6 months.  Usually this is after I have had to reinstall the OS 
at least once to fix the breakage that is the Windows registry.  Its IDEs for 
C/C++ are not bad, if you don't mind having Windows-isms leak into every part of 
your code.  .Net is a step in a much better direction, but I can get most of 
that goodness under Mono and run on more platforms.   The Windows GUI is not very 
good at all once you have lived in the Mac world for very long.  The Mac 
running OS X gives most of the advantages of Unix on a very visually 
well-designed system with excellent tools.  Windows is a disaster for hacking.  
Most of the normal CLI tools are utterly brain-dead.  I always load Cygwin and 
live under it as much as possible on Windoze.  We won't even get into the 
virus-breeding idiocy that is Windows.  

  At the least, go the Mac route.  That is my choice today for most of the 
things I care about.  Especially now that I can easily run Linux and/or Windows 
when I need to in virtual machines (Parallels).   For many years I did not have 
any Windows machine, and I had no trouble working just fine with the rest of the 
world.  What few Windows programs I needed ran fine under Wine or had quite 
usable alternatives. 

  If your language is dependent on one OS, then it sucks from the beginning.


  You say you looked at different languages such as Lisp, but your comments say 
you didn't really understand what you were looking at, at all.

Re: [agi] Why C++ ?

2007-03-23 Thread Samantha Atkins
On Fri, 2007-03-23 at 22:48 -0400, Ben Goertzel wrote:
> Samantha Atkins wrote:
> > Ben Goertzel wrote:
> >>
> >> Regarding languages, I personally am a big fan of both Ruby and 
> >> Haskell.  But, for Novamente we use C++ for reasons of scalability.
> > I am  curious as to how C++ helps scalability.  What sorts of 
> > "scalability"?  Along what dimensions?  There are ways that C++ does 
> > not scale so well like across large project sizes or in terms of 
> > maintainability.   It also doesn't scale that well in terms of  space 
> > requirements if the  class hierarchy gets  too deep or uses much  
> > multiple inheritance  of  non-mixin classes.   It also doesn't scale 
> > well in large team development.  So I am curious what you mean.
> >
> 
> I mean that Novamente needs to manage large amounts of data in heap 
> memory, which needs to be very frequently garbage collected according to 
> complex patterns.
> 

So all collection is being done by hand, since C/C++ have no GC facilities.
But complex heap management could be done in most any language where you
can get at the bits.  Heap management could logically be handled
separately from what is using the structures to be managed, as long
as there is good enough binding between the languages.  But I see here
that having a language relatively "close to the metal" was useful. 

> We are doing probabilistic logical inference IN REAL TIME, for real time 
> embodied agent control.  This is pretty intense.  A generic GC like 
> exists in LISP or Java won't do.
> 

Lisp, though, can handle allocating large arrays in memory that are then
subdivided.  Exactly what you need could be modeled in Lisp, and the
code generation then tweaked to make it as efficient as needed.  It would be a
bit unusual, but doable. 
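The scheme described here, allocating one large flat array up front and subdividing it yourself so that "collection" is just returning a slot rather than tracing the heap, might be sketched as a fixed-size object pool. The `FixedPool` class below is illustrative, not from any actual AGI codebase:

```python
import array

class FixedPool:
    """Preallocate one flat array and hand out fixed-size slots from a
    free list; releasing a slot is O(1) and involves no general GC."""
    def __init__(self, slot_size, slots):
        self.slot_size = slot_size
        self.data = array.array("d", [0.0] * (slot_size * slots))
        self.free = list(range(slots))       # free list of slot indices

    def alloc(self):
        return self.free.pop()               # raises IndexError if exhausted

    def release(self, slot):
        self.free.append(slot)               # "garbage collection" by hand

    def write(self, slot, values):
        base = slot * self.slot_size
        self.data[base:base + self.slot_size] = array.array("d", values)

pool = FixedPool(slot_size=4, slots=1024)
s = pool.alloc()
pool.write(s, [1.0, 2.0, 3.0, 4.0])
pool.release(s)
```

The point is that allocation cost and lifetime policy are entirely under the program's control, which is what a generic tracing collector cannot promise in real time.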

> Aside from C++, another option might have been to use LISP and write our 
> own custom garbage collector for LISP.  Or, to go with C# and then use 
> unsafe code blocks for the stuff requiring intensive memory management.
> 

Yes, a similar path.  It could almost be done in Java at the byte-code
level, but that would arguably be even more unfriendly than C++. 

> Additionally, we need real-time, very fast coordinated usage of multiple 
> processors in an SMP environment.  Java, for one example, is really slow 
> at context switching between different threads.
> 

Depending on the threading model I can see that. Clever hacking can get
around some needs for context switching but then you start stepping
beyond the things Java is good for.  Did you have much opportunity to
form a judgment about Erlang?

> Finally, we need rapid distributed processing, meaning that we need to 
> rapidly get data out of complex data structures in memory and into 
> serialized bit streams (and then back into complex data structures at 
> the other end).  This means we can't use languages in which object 
> serialization is a slow process with limited 
> customizability-for-efficiency.
> 

Lisp could excel at streaming data. Java data streaming isn't that bad
either, as it can be customized to stream only what you wish streamed
for your specific needs, with custom readers and writers per object.  It
is relatively easy to take this custom approach in Java.  You aren't stuck
with the defaults. 
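The custom per-object readers and writers mentioned here can be sketched with a hand-written wire format. The `Node` type and its layout below are hypothetical, chosen only to show the pattern of bypassing a language's default serializer:

```python
import struct

HEAD = "<IdI"  # little-endian: id (uint32), activation (float64), link count

class Node:
    """Hypothetical data structure with a hand-written wire format, in the
    spirit of custom readObject/writeObject pairs."""
    def __init__(self, node_id, activation, links):
        self.node_id = node_id
        self.activation = activation
        self.links = links

    def to_bytes(self):
        head = struct.pack(HEAD, self.node_id, self.activation, len(self.links))
        return head + struct.pack(f"<{len(self.links)}I", *self.links)

    @classmethod
    def from_bytes(cls, buf):
        node_id, activation, n = struct.unpack_from(HEAD, buf, 0)
        links = list(struct.unpack_from(f"<{n}I", buf, struct.calcsize(HEAD)))
        return cls(node_id, activation, links)

n = Node(7, 0.5, [1, 2, 3])
restored = Node.from_bytes(n.to_bytes())
print(restored.links)   # [1, 2, 3]
```

Because the layout is fixed and explicit, there is no reflection or metadata overhead on either end, which is the property being argued for in a real-time distributed setting.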

> When you start trying to do complex learning in real time in a 
> distributed multiprocessor context, you quickly realize that 
> C-derivative languages are the only viable option.   Being mostly a 
> Linux shop we didn't really consider C# (plus back when we started, .Net 
> was a lot less far along, and Mono totally sucked).
> 
> C++ with heavy use of STL and Boost is a different language than the C++ 
> we old-timers got used to back in the 90's.   It's still a large and 
> cumbersome language but it's quite possible to use it elegantly and 
> intelligently.  I am not such a whiz myself, but fortunately some of our 
> team members are.
> 

Hehehe.  That was the late '80s for me, with another C++ stint from
'96-'99.  STL and Boost give it some of the power of Lisp, but in a much
more difficult-to-understand-and-extend manner.  :-) 

I can see the choice for tight management of memory, for sure.  I have
some thoughts about using C#, at least for optimizing a general cache I
devised myself.  I know less about how well C and C++ exploit the
newer multi-core processors.  It is my understanding that most of the
compilers still suck at taking advantage of such things.  

Lisp or Java should do fine, with some tweaking, at interprocess streaming
of arbitrarily complex data.  

Thanks for the interesting and informative answer.  



[agi] Re: Why C++ ?

2007-03-23 Thread Ben Goertzel


BTW, I think I have answered that question at least 5 times on this list 
or on the SL4 list.  I'm almost motivated to write a "Novamente FAQ" to 
avoid this sort of repetition!!! 


ben


Ben Goertzel wrote:

Samantha Atknis wrote:

Ben Goertzel wrote:


Regarding languages, I personally am a big fan of both Ruby and 
Haskell.  But, for Novamente we use C++ for reasons of scalability.
I am  curious as to how C++ helps scalability.  What sorts of 
"scalability"?  Along what dimensions?  There are ways that C++ does 
not scale so well like across large project sizes or in terms of 
maintainability.   It also doesn't scale that well in terms of  space 
requirements if the  class hierarchy gets  too deep or uses much  
multiple inheritance  of  non-mixin classes.   It also doesn't scale 
well in large team development.  So I am curious what you mean.




I mean that Novamente needs to manage large amounts of data in heap 
memory, which needs to be very frequently garbage collected according 
to complex patterns.


We are doing probabilistic logical inference IN REAL TIME, for real 
time embodied agent control.  This is pretty intense.  A generic GC 
like exists in LISP or Java won't do.


Aside from C++, another option might have been to use LISP and write 
our own custom garbage collector for LISP.  Or, to go with C# and then 
use unsafe code blocks for the stuff requiring intensive memory 
management.


Additionally, we need real-time, very fast coordinated usage of 
multiple processors in an SMP environment.  Java, for one example, is 
really slow at context switching between different threads.


Finally, we need rapid distributed processing, meaning that we need to 
rapidly get data out of complex data structures in memory and into 
serialized bit streams (and then back into complex data structures at 
the other end).  This means we can't use languages in which object 
serialization is a slow process with limited 
customizability-for-efficiency.


When you start trying to do complex learning in real time in a 
distributed multiprocessor context, you quickly realize that 
C-derivative languages are the only viable option.   Being mostly a 
Linux shop we didn't really consider C# (plus back when we started, 
.Net was a lot less far along, and Mono totally sucked).


C++ with heavy use of STL and Boost is a different language than the 
C++ we old-timers got used to back in the 90's.   It's still a large 
and cumbersome language but it's quite possible to use it elegantly 
and intelligently.  I am not such a whiz myself, but fortunately some 
of our team members are.


-- Ben G








[agi] Why C++ ?

2007-03-23 Thread Ben Goertzel

Samantha Atkins wrote:

Ben Goertzel wrote:


Regarding languages, I personally am a big fan of both Ruby and 
Haskell.  But, for Novamente we use C++ for reasons of scalability.
I am  curious as to how C++ helps scalability.  What sorts of 
"scalability"?  Along what dimensions?  There are ways that C++ does 
not scale so well like across large project sizes or in terms of 
maintainability.   It also doesn't scale that well in terms of  space 
requirements if the  class hierarchy gets  too deep or uses much  
multiple inheritance  of  non-mixin classes.   It also doesn't scale 
well in large team development.  So I am curious what you mean.




I mean that Novamente needs to manage large amounts of data in heap 
memory, which needs to be very frequently garbage collected according to 
complex patterns.


We are doing probabilistic logical inference IN REAL TIME, for real time 
embodied agent control.  This is pretty intense.  A generic GC like 
exists in LISP or Java won't do.


Aside from C++, another option might have been to use LISP and write our 
own custom garbage collector for LISP.  Or, to go with C# and then use 
unsafe code blocks for the stuff requiring intensive memory management.


Additionally, we need real-time, very fast coordinated usage of multiple 
processors in an SMP environment.  Java, for one example, is really slow 
at context switching between different threads.


Finally, we need rapid distributed processing, meaning that we need to 
rapidly get data out of complex data structures in memory and into 
serialized bit streams (and then back into complex data structures at 
the other end).  This means we can't use languages in which object 
serialization is a slow process with limited 
customizability-for-efficiency.


When you start trying to do complex learning in real time in a 
distributed multiprocessor context, you quickly realize that 
C-derivative languages are the only viable option.   Being mostly a 
Linux shop we didn't really consider C# (plus back when we started, .Net 
was a lot less far along, and Mono totally sucked).


C++ with heavy use of STL and Boost is a different language than the C++ 
we old-timers got used to back in the 90's.   It's still a large and 
cumbersome language but it's quite possible to use it elegantly and 
intelligently.  I am not such a whiz myself, but fortunately some of our 
team members are.


-- Ben G





Re: [agi] My proposal for an AGI agenda

2007-03-23 Thread Samantha Atkins

Ben Goertzel wrote:


Regarding languages, I personally am a big fan of both Ruby and 
Haskell.  But, for Novamente we use C++ for reasons of scalability.
I am curious as to how C++ helps scalability.  What sorts of 
"scalability"?  Along what dimensions?  There are ways that C++ does not 
scale so well, such as across large project sizes or in terms of 
maintainability.   It also doesn't scale that well in terms of space 
requirements if the class hierarchy gets too deep or uses much 
multiple inheritance of non-mixin classes.   It also doesn't scale 
well in large-team development.  So I am curious what you mean.





Re: [agi] My proposal for an AGI agenda

2007-03-23 Thread David Clark
- Original Message - 
  From: Shane Legg 
  To: agi@v2.listbox.com 
  Sent: Friday, March 23, 2007 10:34 AM
  Subject: Re: [agi] My proposal for an AGI agenda




  On 3/23/07, David Clark <[EMAIL PROTECTED]> wrote:
I have a Math minor from University but in 32 years of computer work, I
haven't used more than grade 12 Math in any computer project yet.

  ...

I created a bond comparison program for a major wealth investment firm that
used a pretty fancy formula at its core, but I just typed it in.  I didn't
have to create it, prove it, or even understand exactly why it was any good. 

  IMO, creating an AGI isn't really a programming problem.  The hard part is
  knowing exactly what to program.  The same was probably true of your bond
  program: the really hard part was originally coming up with that 'fancy
  formula' which you just had to type in.

Both the code and the algorithm must be good for any computer system to work, and 
neither is easy.  The bond formula had been published for many years, but this 
particular company certainly didn't have a copy of it inside a program they 
could use.  The formula was 1 line out of at least 4K lines of code.  The program 
wasn't so trivial either :)  The formula didn't do them any good at all until 
my program made it useful to them.  In its first year, the program netted the 
company's clients over 10M in extra profits.


  Thus far math has proven very useful in many areas of artificial intelligence,
  just pick up any book on machine learning such as Bishop's.  Whether it will
  also be of large use for AGI... only time will tell. 

  Shane

Is the research on AI full of Math because many Math professors 
publish in the field, or is the problem really Math-related?  Many PhDs in 
computer science are Math-oriented exactly because the professors who deem 
their work worthy of a PhD are either Mathematicians or their sponsoring professor 
was.

I have read many books on AI, and I had a Math professor as a business partner 
in my computer work for 10 years.  I would challenge anyone to find a major 
non-Math-oriented computer program that was designed and implemented by a formal 
Mathematician.  I can give you many examples where they weren't.

-- David Clark



Re: [agi] My proposal for an AGI agenda

2007-03-23 Thread Samantha Atkins

David Clark wrote:


My AGI and my language were never designed to have "millions and billions of
asynchronous, concurrent units".  My design certainly has the ability to
run a few thousand events simultaneously and pass messages efficiently
between them, but this is not what you are asking.  My language would be
totally inadequate for the kind of solution you envision, but I would argue
that for the kinds of systems I envision, current languages are equally
ill-suited.  I have a summary of the features in my system, and some examples
of why Lisp wasn't suitable, at www.rccconsulting.com/aihal.htm
  

Here are a few thoughts on the list at your site.

The following is the list of language requirements I thought an AI would 
minimally need.


  1. Object Oriented.

OO is great except when it isn't.  What features do you want from OO?  
Encapsulation? Modularity? Inheritance? Reuse?  All wonderful, but OO is 
not the only way, and in some situations not the best way, to achieve them.  
Single dispatch or multiple dispatch?  Class-based or prototype-based or a 
mix?  Frame-type constructs?  Inheritance or delegation? 
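The single-versus-multiple-dispatch question can be made concrete. `functools.singledispatch` is Python's real single-dispatch mechanism (the method chosen depends only on the first argument's type); the multiple-dispatch table below is a hand-rolled sketch for contrast:

```python
from functools import singledispatch

# Single dispatch: behaviour varies with the type of the FIRST argument only.
@singledispatch
def collide(a, b):
    return "generic collision"

@collide.register
def _(a: int, b):
    return "int hits something"

# Multiple dispatch (sketch): behaviour varies with BOTH argument types,
# looked up in an explicit rule table.
RULES = {}

def defmethod(ta, tb, fn):
    RULES[(ta, tb)] = fn

def collide2(a, b):
    return RULES[(type(a), type(b))](a, b)

defmethod(int, str, lambda a, b: "int hits str")
defmethod(str, int, lambda a, b: "str hits int")

print(collide(1, "x"))    # int hits something
print(collide2(1, "x"))   # int hits str
```

CLOS-style generic functions are essentially a much richer version of the `RULES` table, which is why the choice between the two models shapes a language design so deeply.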

2. All Classes, Methods, Properties, Objects can be created and changed 
easily by the program itself.


Is there a MOP?  Can the implementation details be tweaked freely?  Can 
cross-cutting concerns be cleanly expressed? 


3. Efficiently run many programs at the same time.

What is and isn't a "program" or unit of computation in your thinking?  
Is this part of the language or the OS, or do you agree with Alan Kay 
that an OS is where you put all the stuff you failed to account for in 
the language?


4. Fully extensible.

So there is a MOP, and the constructs of the language can be changed, 
added to, or implemented differently fairly freely?


5. Built-in IDE (Integrated Development Environment) that allows 
programs to be running concurrently with program edit, compile and run.


An IDE is just another program or set of programs.  I don't see it as being 
particularly "built-in", except that languages can support interpretation, 
reflection, JIT compilation, run-time code generation and loading of 
code, and so on, all of which make development and deployment much more 
efficient.   Lisp machines had exactly the type of IDE you are talking 
about.  Smalltalk still does.  Both Lisp and Smalltalk allow changing 
code on the fly without restart. 
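The change-code-without-restart workflow of Lisp machines and Smalltalk can be approximated, weakly, in any language with late binding. A minimal sketch (the `handlers` indirection is the illustrative device here, not a feature of any particular system):

```python
# A long-lived system calls through a name rather than a fixed function,
# so rebinding the name changes behaviour while the system keeps running.
def step():
    return "v1"

handlers = {"step": step}

def run_once():
    return handlers["step"]()    # late-bound lookup on every call

print(run_once())        # v1

def step_v2():
    return "v2"

handlers["step"] = step_v2       # "recompile" with no restart
print(run_once())        # v2
```

Lisp and Smalltalk make this the default for every function and method, with the editor, compiler, and running image all sharing one address space, which is the stronger claim being made above.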


6. As simple as possible.

Hard to get much simpler than Lisp with as much power.

7. Quick programming turnaround time.

As above, concerning interactive, iterative development without 
edit-compile-link-test cycles.  If you are concerned with portability, 
writing the language in Java or even Smalltalk would give you guaranteed 
portability on about every system out there.  You could also write in 
Python or Ruby and be much more productive than in C/C++, although you 
would want to optimize crucial sections later.


8. Fast where most of the processing is done.

In the language, or in things written in the language, or both?  Lisp has 
been interpreted and compiled simultaneously and nearly seamlessly for 
20 years, and it has efficiency approaching compiled C in many problem 
domains.   

Only some programs spend most of their time in the indexing, searching, and 
other internals you write about optimizing. 

Lisp allows the creation of new "primitives", optimized as highly as you 
can imagine at any time. 

If this is mainly about developer productivity and power then getting 
too caught up in efficiency before addressing productivity and 
expressiveness and maintainability would be a mistake.


9. Simple hierarchy of Classes and of Objects.

  1. Simple Class inheritance.
  2. Simple external file architecture.
  3. Finest possible edit and compile without any linking of object
 modules.
  4. Scalable to relatively large size.
  5. Built in SQL, indexes, tables, lists, stacks and queues.
  6. Efficient vector handling of all data types.
  7. Internet interface.
  8. Runs on Windows PC machines.
  9. Can run as multiple separate systems on the same computer or in
 the background

Class inheritance is only one of many useful mechanisms.  Single or 
multiple?  Mixins supported?  Typing?  What do you have in mind as the 
needed external file architecture?  What kind of module structure?  SQL?  
Why?  Don't you want something more like OQL, at least?  What kind of tables?  
Data structures are trivial to build up in any reasonably powerful language, 
so I guess some of this is about what is in the standard library.


On threading: I have built and deployed systems running hundreds of 
threads as app servers on fairly normal mid- to high-end desktops, under 
both Windows and Linux, coded in Java.   So it is certainly false that 
no one does this.  Many locking problems can be ameliorated by using more 
of a work-queue-based approach. Often mechanisms like thread-local 
storage can further reduce the amount of locking needed.  Threads can be 
done in a much more lightweight manner than some of the standard
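The work-queue approach mentioned above, where the queue is the only synchronized object and workers never touch shared state directly, might be sketched as:

```python
import queue
import threading

# Workers pull independent tasks from one queue; locking is confined
# to the queue itself instead of being scattered across shared state.
tasks = queue.Queue()
results = queue.Queue()

def worker():
    while True:
        n = tasks.get()
        if n is None:            # sentinel: shut this worker down
            break
        results.put(n * n)       # the "work": square the input
        tasks.task_done()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for n in range(10):
    tasks.put(n)
for _ in threads:
    tasks.put(None)              # one sentinel per worker
for t in threads:
    t.join()

print(sorted(results.queue))     # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

Because each task is self-contained, adding workers scales throughput without adding new lock-ordering hazards, which is the claim being made for this style.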

Re: [agi] My proposal for an AGI agenda

2007-03-23 Thread Jey Kottalam

On 3/23/07, rooftop8000 <[EMAIL PROTECTED]> wrote:

Suppose there was an AGI framework that everyone could add
their ideas to..   What properties should it have? I listed
some points below. What would it take for
you to use the framework? You can add points if you like.



I don't understand. Is the hypothesis that if we have enough people
writing and contributing AI modules according to their conception of
intelligence, and we wire the modules up to each other, then AGI
will {result, emerge, pop out}? This doesn't sound like a feasible
approach. And, if there isn't a coherent picture of how the modules
are supposed to interact, how can you choose the design of
infrastructure like the language, organization, and knowledge base?
This seems backwards: to choose a design for the infrastructure, then
fit an AGI design to the infrastructure. It's analogous to "I don't
know how to build a house, but I know I want to use a sledgehammer to do
it." :-)

-Jey Kottalam



Re: [agi] My proposal for an AGI agenda

2007-03-23 Thread David Clark
- Original Message - 
From: "rooftop8000" <[EMAIL PROTECTED]>
To: 
Sent: Friday, March 23, 2007 1:48 PM
Subject: Re: [agi] My proposal for an AGI agenda


> Suppose there was an AGI framework that everyone could add
> their ideas to..   What properties should it have? I listed
> some points below. What would it take for
> you to use the framework? You can add points if you like.
>
>
> 1. collaboration. is it possible to focus all attention/ work
> in one big project? will it be too complex and unmanageable?

I think that specific parts can be designated to multiple developers in a
coherent manner without requiring full-time effort by everyone.  An ad hoc or
"anything goes" approach might be interesting but without direction it would
never get anywhere.

> 2. supported computer languages? many or one?

Multi-language design won't work IMO.  No one could just pop a whole set of
routines into someone else's computer and watch the result.  Different
language modules couldn't just fit together as needed.  Larger existing AGI
projects could communicate in the future by using sockets like Novamente,
A2I2 or others but this communication with proprietary systems (or other
language designs) wouldn't be the same as people working in the same
environment.

> 3. organization?
> -try to find a small set of algorithms (seed AI)?
> -allow everything that is remotely useful?

If you think you can breed an intelligence from a "small set of algorithms"
then why not just make it and see if you can? (evolutionary algorithms
wouldn't have to be excluded from the research)  Limiting what people could
work on isn't a good idea but some lines of research have been tried and
found wanting.  People could be more useful by working in areas that most
others in the group believed to be most promising.  Unless someone is
getting paid, however, it is difficult to force them to NOT work in an area
they have interest in.  Not all code would have to be included in the AGI
design just because it was made.

> 4. what is the easiest way for people to contribute?
> how could existing algorithms be added easily?
> -neural networks
> -logic
> -existing knowledge bases

I don't think the number of algorithms matters as much as getting some
promising results quickly.  Even some small promising results!

> 5. what kind of problems should be the focus first?
>  -visual/real world robotics?
>  -abstract problems?

Is a blind person still intelligent?  Can someone still be intelligent if
they can't solve abstract problems?  The better question is: can you teach
anything to someone who doesn't understand your language?  I think building
models that combine language/context and information would be a good
place to start.

> 6. self-modification?
> -don't waste time on it, it will never give good results?
> -all the parts should be able to be modified easily?

If no self-modification then you have to build all the intelligence into the
data or people have to program the entire AGI by hand.  Does either of these
consequences appeal to you?  If the parts aren't easily modified and you
don't know exactly what might work, then the project doesn't have much
chance.

> 7.organization in modules?
> -what is the best granularity?
> -how to generalize from them (instead
> of just getting the sum of all the algorithms)?

I suggest that models produce a set of coded patterns.  These patterns could
be accessed to get lists of models that produce similar patterns in other
domains.  Testing could be done using algorithms from the initial domain to
help solve the problem in the new domain.  I think generalization should
occur at all levels, all the time.
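One hypothetical reading of this pattern-index idea: each model registers a coded pattern signature, and the index returns models from *other* domains that produced the same pattern. Every name below is invented for illustration.

```python
from collections import defaultdict

# pattern signature -> list of (model_name, domain) that produced it
pattern_index = defaultdict(list)

def register(model_name, domain, pattern):
    pattern_index[pattern].append((model_name, domain))

def similar_models(pattern, exclude_domain):
    # models in other domains whose output matched the same coded pattern
    return [m for (m, d) in pattern_index[pattern] if d != exclude_domain]

register("edge-detector", "vision", "gradient-peak")
register("onset-detector", "audio", "gradient-peak")
```

So a query for `"gradient-peak"` from the vision domain would surface the audio model as a cross-domain candidate to test.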

> 8. cpu power
> -only allow very fast, optimized algorithms
> -allow anything

Most importantly, it has to work.  That said, is "allow anything" ok with you?

> 9. the set of properties required
> -too large to do by hand?
> -try to let properties emerge?

Why not both?  Let the system evolve into what properties make sense.

> 10. KBs. how to use them? how to reuse existing ones

I don't think existing KBs would be very useful because I think that
language/knowledge and context must be coded or learned together.  What
weight would you automatically assign to this information that you haven't
created and don't know the validity or completeness of?

> 11. embodiment?  important or not?

If a person is paralyzed and can only type on a computer, are they still
intelligent?  If a knowledgeable person is helping you out on the internet
using instant messenger (no vision), are they still intelligent or useful?
Embodiment doesn't seem to be a requirement for intelligence in people.

> 12. representations?
> -try to find a small set
> -allow everything

Why not try some small set and then throw out what you don't like and add
new if you think it necessary?

> 13. learning.

Can a system have too many ways of learning?  So long as they work, why not
the more the merrier?

> 14. how to make it scale well

Can many pe

Re: [agi] My proposal for an AGI agenda

2007-03-23 Thread Russell Wallace

On 3/23/07, rooftop8000 <[EMAIL PROTECTED]> wrote:


1. collaboration. is it possible to focus all attention/ work
in one big project? will it be too complex and unmanageable?



Like the Web: most contributors don't need to be part of the original
organization.

2. supported computer languages? many or one?


Two. One for the framework code (I'd suggest Java or maybe C#, though C++ is
not entirely without merit), and one for content. Look at Novamente, they
use C++ for the former and a homebrew functional programming language for
the latter.

3. organization?

-try to find a small set of algorithms (seed AI)?
-allow everything that is remotely useful?



Allow everything that any contributor thinks is useful.

4. what is the easiest way for people to contribute?

how could existing algorithms be added easily?
-neural networks
-logic
-existing knowledge bases
-..



Logic. More precisely, content should be written in a notation that looks
like predicate calculus for factual statements blending into something
vaguely like Prolog and Haskell for procedural knowledge. Of course,
graphical tools will also end up being desirable for other forms of
contribution - but they should write stuff in the standard notation at the
back end.

5. what kind of problems should be the focus first?

-visual/real world robotics?
-abstract problems?



Have to start with abstract problems, but need to quickly start making major
progress on visual/spatial.

6. self-modification?

-don't waste time on it, it will never give good results?
-all the parts should be able to be modified easily?



Don't think in terms of self-modifying code - the framework (Java/C#/C++)
should be a conventional program modified only by humans - but of content
being able to casually reuse other content and incorporate it into new
content.

7.organization in modules?

-what is the best granularity?
-how to generalize from them (instead
of just getting the sum of all the algorithms)?



Fairly fine grained, think Web pages for an analogy. (Reuse needs to
typically be by incorporation rather than purely by reference as in the Web
case, but this should be transparent to the user.)

8. cpu power

-only allow very fast, optimized algorithms
-allow anything



Allow anything.

9. the set of properties required

-too large to do by hand?
-try to let properties emerge?



Like the Web, too large for any one group to do by hand, but not necessarily
too large for the world to do by hand.

10. KBs. how to use them? how to reuse existing ones


It will be desirable to have ways to interface to e.g. relational databases,
the Web etc.

11. embodiment?  important or not?


Moving soon and rapidly in that direction is important, though we won't be
using physical robots from day one.

12. representations?

-try to find a small set
-allow everything



Canonical notation as described above. Of course, stacked over that will be
many kinds of semantic layers, so if that's what you're thinking of by
"representations" then allow everything.

13. learning.


Sure. Automated learning will be a useful tool, just can't rely on it
_instead of_ engineering.

14. how to make it scale well


Look for inspiration at existing highly scalable systems such as Usenet, the
Web, Google, file sharing systems, distributed computing projects e.g.
[EMAIL PROTECTED]

15. central structure ?

distributed control?



Distributed, like the Internet.



Re: [agi] My proposal for an AGI agenda

2007-03-23 Thread rooftop8000
Suppose there was an AGI framework that everyone could add 
their ideas to..   What properties should it have? I listed 
some points below. What would it take for 
you to use the framework? You can add points if you like.


1. collaboration. is it possible to focus all attention/ work 
in one big project? will it be too complex and unmanageable? 

2. supported computer languages? many or one?

3. organization? 
-try to find a small set of algorithms (seed AI)?
-allow everything that is remotely useful?

4. what is the easiest way for people to contribute?
how could existing algorithms be added easily?
-neural networks
-logic
-existing knowledge bases
-..

5. what kind of problems should be the focus first?
 -visual/real world robotics?
 -abstract problems?

6. self-modification?
-don't waste time on it, it will never give good results?
-all the parts should be able to be modified easily?

7.organization in modules? 
-what is the best granularity?
-how to generalize from them (instead
of just getting the sum of all the algorithms)?

8. cpu power
-only allow very fast, optimized algorithms
-allow anything

9. the set of properties required
-too large to do by hand?
-try to let properties emerge?

10. KBs. how to use them? how to reuse existing ones

11. embodiment?  important or not?

12. representations?
-try to find a small set 
-allow everything

13. learning. 

14. how to make it scale well

15. central structure ?
distributed control?

..




 




Re: [agi] My proposal for an AGI agenda

2007-03-23 Thread DEREK ZAHN

David Clark writes:


I looked up SEXPR and the following is what I got.


I think he was just using shorthand for "s-expression".

Looking over the web page you linked to, it seems like your approach is 
basically that building an AGI (at least an AGI of the type you are 
pursuing) is at its heart a large software engineering task not much 
different from other large software engineering tasks.  So you're building a 
language that you believe will help you with that engineering task.  Anybody 
who has written software for a long time has opinions about what works best 
for developing large software projects, and there's nothing wrong with you 
embodying your ideas on that subject in your language.


Issues like indentation, punctuation, and so on are clearly not central to 
AGI as an endeavor, but one intriguing aspect of your project is the focus 
on self-modifying code.  Any information you'd like to share about how your 
language constructs make on the fly code generation elegant and natural 
would be very interesting.





Re: [agi] My proposal for an AGI agenda

2007-03-23 Thread David Clark
I looked up SEXPR and the following is what I got.

> This library is intended to provide a minimal C/C++ API to efficiently
create, manipulate, and
> parse LISP-style symbolic expressions. An interpreter for a very minimal,
high-performance
> subset of LISP will eventually be included.


- Original Message - 
From: "Eugen Leitl" <[EMAIL PROTECTED]>
To: 
Sent: Friday, March 23, 2007 9:58 AM
Subject: Re: [agi] My proposal for an AGI agenda

> Interesting. Why did you feel the need to improve on SEXPRs? What does
> it improve on the CL Lisp model?

1. I don't want or need a C/C++ API.
2. I intend to do a lot more than just "LISP-style symbolic expressions"
3. I don't think LISP is very OOP.
4. I don't like mixing program structure with program functions like LISP does.
5. I don't like the prefix notation of LISP or its unnecessary punctuation.

etc etc

LISP was designed a long time ago by University researchers and they can
have it as far as I am concerned.

My language stands all on its own.  Anyone wanting a language to manipulate
arbitrary files on their computer can't use it, as it is restricted to its
own directory by design. I could name many applications where my system is
not appropriate but I think AGI isn't one of them.

-- David Clark




Re: [agi] My proposal for an AGI agenda

2007-03-23 Thread David Clark

- Original Message - 
From: "Eugen Leitl" <[EMAIL PROTECTED]>
To: 
Sent: Friday, March 23, 2007 10:26 AM
Subject: Re: [agi] My proposal for an AGI agenda

> > A CPU executes instructions including assignment, conditionals and simple
> > looping.  How can a language not have these things and still be useful?
>
> Does the human brain tissue have assignments, conditionals, and simple
looping?
> I don't think it does, and yet it is good enough that I can understand
your
> message (at least I think so) by the feat of Natural Intelligence.

As I have stated before, computers and people aren't similar.  If you create
hardware that acts in ways like its biological counterpart then my current
ideas wouldn't be appropriate.  Can you argue that widely available
computers (the one I am using to send you this email) don't have looping
etc?

> If you look at provably optimal computing substrates, they're very
> far removed from what you would consider computation. A classical
> approach looks a lot like a silicon compiler, only on a 3d lattice.
> Signal timing and gates are discrete though, which removes any parasitic/
> dirt effects from design.

I enjoy this kind of information but it doesn't help me with my AGI design.
I want an AGI design that models the higher level aspects of our cognition
and how it actually gets there doesn't have to have anything to do with our
brains.  I am glad that you want to build a machine and software to create
intelligence from the bottom up but that is not my approach.  Unlike some
others, I wouldn't hazard a guess as to your success potential but your
approach isn't mine.

> Does your language allow you to do asynchronous message passing (no shared
> memory) across a signalling mesh, involving millions and billions of
> asynchronous, concurrent units?

My AGI and my language were never designed to have "millions and billions of
asynchronous, concurrent units".  My design certainly has the ability to
run a few thousand events simultaneously and pass messages efficiently
between them, but this is not what you are asking.  My language would be
totally inadequate for the kind of solution you envision but I would argue
that for the kinds of systems I envision, current languages are equally
ill-suited.  I have a summary of the features in my system and some examples
concerning why Lisp wasn't suitable at  www.rccconsulting.com/aihal.htm
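As a rough sketch of the "few thousand events passing messages" scale being described (illustrative only, not David Clark's language): a chain of lightweight units connected by asynchronous queues, with no shared memory between them.

```python
import asyncio

async def unit(inbox, outbox):
    # each unit waits for one message, does a tiny bit of work, passes it on
    msg = await inbox.get()
    await outbox.put(msg + 1)

async def main(n_units=1000):
    queues = [asyncio.Queue() for _ in range(n_units + 1)]
    units = [asyncio.create_task(unit(queues[i], queues[i + 1]))
             for i in range(n_units)]
    await queues[0].put(0)           # inject one message at the front
    result = await queues[-1].get()  # incremented once per unit in the chain
    await asyncio.gather(*units)
    return result

final = asyncio.run(main())
```

A thousand cooperatively scheduled units like these cost far less than a thousand OS threads, which is one way "a few thousand events simultaneously" stays practical on commodity hardware.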


-- David Clark




Re: [agi] My proposal for an AGI agenda

2007-03-23 Thread Mark Waser
But it seems like what Loosemore wants is an environment that will help 
him **discover** the right AGI design ... this is a different matter 
Or am I misunderstanding?


   Somewhat.  He made it sound that way, but I would also assert that having 
such an environment would certainly help speed up Novamente development, 
where you clearly already have and are following a high-level design.  Your 
subprojects and your research into learning algorithms would benefit 
even more.


- Original Message - 
From: "Ben Goertzel" <[EMAIL PROTECTED]>

To: 
Sent: Friday, March 23, 2007 2:56 PM
Subject: Re: [agi] My proposal for an AGI agenda



Mark Waser wrote:

>> IMO, creating an AGI isn't really a programming problem.  The hard
part is knowing exactly what to program. Which is why it turns into a 
programming problem . . . .  I started out as a biochemist studying 
enzyme kinetics.  The only reasonable way to get a reasonable turn-around 
time on testing a new "fancy formula" was to update the simulation 
program myself. If the tools were there (i.e. Loosemoore's environment), 
it wouldn't be a programming problem.  Since they aren't, the programming 
turns into a/the real problem.:-)


Well, programming AGI takes more time and effort now than it would with 
more appropriate programming tools ...


But it seems like what Loosemore wants is an environment that will help 
him **discover** the right AGI design ... this is a different matter 
Or am I misunderstanding?


-- Ben



Re: [agi] My proposal for an AGI agenda

2007-03-23 Thread Ben Goertzel

Mark Waser wrote:
>> IMO, creating an AGI isn't really a programming problem.  The hard 
part is knowing exactly what to program. 
 
Which is why it turns into a programming problem . . . .  I 
started out as a biochemist studying enzyme kinetics.  The only 
reasonable way to get a reasonable turn-around time on testing a new 
"fancy formula" was to update the simulation program myself. 
 
If the tools were there (i.e. Loosemoore's environment), it 
wouldn't be a programming problem.  Since they aren't, the programming 
turns into a/the real problem.:-)


Well, programming AGI takes more time and effort now than it would with 
more appropriate programming tools ...


But it seems like what Loosemore wants is an environment that will help 
him **discover** the right AGI design ... this is a different 
matter  Or am I misunderstanding?


-- Ben



Re: [agi] My proposal for an AGI agenda

2007-03-23 Thread Mark Waser
>> IMO, creating an AGI isn't really a programming problem.  The hard part is 
>> knowing exactly what to program. 

Which is why it turns into a programming problem . . . .  I started out as 
a biochemist studying enzyme kinetics.  The only reasonable way to get a 
reasonable turn-around time on testing a new "fancy formula" was to update the 
simulation program myself.  

If the tools were there (i.e. Loosemoore's environment), it wouldn't be a 
programming problem.  Since they aren't, the programming turns into a/the real 
problem.:-)
  - Original Message - 
  From: Shane Legg 
  To: agi@v2.listbox.com 
  Sent: Friday, March 23, 2007 1:34 PM
  Subject: **SPAM** Re: [agi] My proposal for an AGI agenda




  On 3/23/07, David Clark <[EMAIL PROTECTED]> wrote:
I have a Math minor from University but in 32 years of computer work, I
haven't used more than grade 12 Math in any computer project yet.

  ...

I created a bond comparison program for a major wealth investment firm that
used a pretty fancy formula at its core but I just typed it in.  I didn't
have to create it, prove it or even understand exactly why it was any good. 

  IMO, creating an AGI isn't really a programming problem.  The hard part is
  knowing exactly what to program.  The same was probably true of your bond
  program: The really hard part was originally coming up with that 'fancy 
formula' 
  which you just had to type in.

  Thus far math has proven very useful in many areas of artificial intelligence,
  just pick up any book on machine learning such as Bishop's.  Whether it will
  also be of large use for AGI... only time will tell. 

  Shane




Re: [agi] My proposal for an AGI agenda

2007-03-23 Thread Shane Legg

On 3/23/07, David Clark <[EMAIL PROTECTED]> wrote:


I have a Math minor from University but in 32 years of computer work, I
haven't used more than grade 12 Math in any computer project yet.


...


I created a bond comparison program for a major wealth investment firm
that
used a pretty fancy formula at its core but I just typed it in.  I didn't
have to create it, prove it or even understand exactly why it was any
good.



IMO, creating an AGI isn't really a programming problem.  The hard part is
knowing exactly what to program.  The same was probably true of your bond
program: The really hard part was originally coming up with that 'fancy
formula'
which you just had to type in.

Thus far math has proven very useful in many areas of artificial
intelligence,
just pick up any book on machine learning such as Bishop's.  Whether it will
also be of large use for AGI... only time will tell.

Shane



Re: [agi] My proposal for an AGI agenda

2007-03-23 Thread Eugen Leitl
On Fri, Mar 23, 2007 at 08:23:51AM -0700, David Clark wrote:

> I have a Math minor from University but in 32 years of computer work, I
> haven't used more than grade 12 Math in any computer project yet.  I have
> produced thousands of programs for at least 100 clients including creating a
> language/database that sold over 30,000 copies.  I have done system
> programming and lots of assembly language as well as major applications in
> PowerBuilder and Visual FoxPro.  Everyone is entitled to their opinion but
> if Math wasn't required at all in all my career, I fail to see how it is
> necessary for the creation of an AGI or any other major programming effort.

I agree that formal mathematics is probably not a useful tool for
AI, but neurons do process information by physical processes which
sometimes have a close match to human concepts (delay, correlation,
multiply). Of course the constraints of biological systems are very
much like engineering (power footprint, structure minimax) and not
at all like mathematics. Mathematics typically doesn't like to 
deal with relativistic latency, for instance. 
 
> A CPU executes instructions including assignment, conditionals and simple
> looping.  How can a language not have these things and still be useful?

Does the human brain tissue have assignments, conditionals, and simple looping?
I don't think it does, and yet it is good enough that I can understand your
message (at least I think so) by the feat of Natural Intelligence.

If you look at provably optimal computing substrates, they're very
far removed from what you would consider computation. A classical
approach looks a lot like a silicon compiler, only on a 3d lattice.
Signal timing and gates are discrete though, which removes any parasitic/
dirt effects from design.

Less conventional computation would be based on an Edge of Chaos dynamic
pattern, which self-organises, homeostates and adapts. 

> I made a point about the efficiency of creating high level languages in the
> language of the AGI.  I argue that this causes a performance hit of up to
> 100x or more depending on the complexity of the code. (less complex means a
> bigger performance hit)  With the tools I have put into my language, higher
> level functional or other languages can easily be made and then compiled
> into the native language for huge cycle savings.  This can't be said for all
> the languages I have looked at so far.

Does your language allow you to do asynchronous message passing (no shared
memory) across a signalling mesh, involving millions and billions of
asynchronous, concurrent units?

-- 
Eugen* Leitl http://leitl.org
__
ICBM: 48.07100, 11.36820   http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE



Re: [agi] Why evolution? Why not neuroscience?

2007-03-23 Thread Pei Wang

Well, since by "evolutionary" you mean "anything which involves
imperfect replication and selection", it will include all adaptive
systems, and intelligent systems, as well.

In general, I don't mind people using this word in this way. I just
don't think it is concrete enough for AGI discussions. For example, my
own system would be considered as "evolutionary" by this definition,
and to label it this way will make some people like it, though I would
rather stress its difference from GA/GP, which fit the label much
better.

Evolutionary systems do carry "memory" across generations, but the
mechanism is very different from the "memory" in intelligent systems.

Again, I'm not saying that "evolution" and "intelligence" have nothing
in common (they have a lot in common), but that in AGI research the
current major danger is to ignore their key differences.

Pei

On 3/23/07, Eugen Leitl <[EMAIL PROTECTED]> wrote:

On Fri, Mar 23, 2007 at 11:29:22AM -0400, Pei Wang wrote:

> In general, we should see intelligence and evolution as two different
> forms of adaptation. Roughly speaking, intelligence is achieved

What about intelligence that works evolutionarily? I agree that Edelman's
thesis is not well validated, but at least to my knowledge it has
not been ruled out.

> through experience-driven changes ("learning" or "conditioning")
> within a single system, while evolution is achieved through
> experience-independent changes ("crossover" or "mutation") across

Of course evolutionary systems can and do carry memory across
generations. They're not ahistorical.

> generations of systems. The "intelligent" changes are more
> justifiable, gradual, and reliable, while the "evolutionary" changes

What if human intelligence is Darwin-driven?

> are more incidental, radical, and risky. Though the two processes do
> have some common properties, their basic principles and procedures are
> quite different.

--
Eugen* Leitl http://leitl.org
__
ICBM: 48.07100, 11.36820   http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE



Re: [agi] Why evolution? Why not neuroscience?

2007-03-23 Thread Eugen Leitl
On Fri, Mar 23, 2007 at 11:29:22AM -0400, Pei Wang wrote:

> In general, we should see intelligence and evolution as two different
> forms of adaptation. Roughly speaking, intelligence is achieved

What about intelligence that works evolutionarily? I agree that Edelman's
thesis is not well validated, but at least to my knowledge it has
not been ruled out.

> through experience-driven changes ("learning" or "conditioning")
> within a single system, while evolution is achieved through
> experience-independent changes ("crossover" or "mutation") across

Of course evolutionary systems can and do carry memory across 
generations. They're not ahistorical.

> generations of systems. The "intelligent" changes are more
> justifiable, gradual, and reliable, while the "evolutionary" changes

What if human intelligence is Darwin-driven?

> are more incidental, radical, and risky. Though the two processes do
> have some common properties, their basic principles and procedures are
> quite different.

-- 
Eugen* Leitl http://leitl.org
__
ICBM: 48.07100, 11.36820   http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE



Re: [agi] My proposal for an AGI agenda

2007-03-23 Thread Eugen Leitl
On Fri, Mar 23, 2007 at 08:36:14AM -0700, David Clark wrote:

> I have created a system that makes "Self modifying code" and I have a design
> that "will make use of self modifying code".  This is exactly why I created
> this language in the first place.

Interesting. Why did you feel the need to improve on SEXPRs? What does
it improve on the CL Lisp model?
 
-- 
Eugen* Leitl http://leitl.org
__
ICBM: 48.07100, 11.36820   http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE



Re: [agi] Why evolution? Why not neuroscience?

2007-03-23 Thread Eugen Leitl
On Fri, Mar 23, 2007 at 12:27:55PM -0400, Richard Loosemore wrote:

> Andi's and Pei's comments bring me to a question I was just about to ask.
> 
> What exactly do people mean by "evolutionary computation" anyway?

I don't know what other people mean by it, but I mean by it anything
which involves imperfect replication and selection.

This is a pretty wide envelope, since it can include human cognition
(neural darwinism among neocortex modules) and mature evolutionary
systems, which are quite beyond the simple framework above.
 
> I always thought I knew what they meant, and it is the same meaning that 
> Andi and Pei are alluding to above (Koza, genetic algorithms, etc):  an 
> approach that stays pretty close to the kind of evolution that DNA likes 
> to do:  with crossover, mutation, etc, operating on things that are 
> represented as strings of symbols.

You've got a genotype, which maps to phenotype, which then gets selected
for, but that's also not very constraining.
 
> As such, I agree with Pei's critique exactly:  evolution is just not a 
> very good metaphor.
> 
> But here is my worry:  are people starting to use "evolutionary" in a 
> more general sense, meaning something like a generalized form of 
> evolution that is really just adaptation?  Would some kinds of NN system 
> be "evolutionary"?  Would evolution be the right word for something that 

It can be.

> tries new possibilities all the time and uses *some* kind of mechanism 
> for strengthening the things that work?  Even if it did not use DNA-like 
> strings of symbols?

What do you understand to be DNA-like? Linear strings? No, it doesn't have
to be a linear string. 
 
> If they were doing this, I'd have to pay more attention.
> 
> I don't think this is happening, but I can't be sure.
> 
> If any one has any more perspective on this, I'd be interested.

-- 
Eugen* Leitl http://leitl.org
__
ICBM: 48.07100, 11.36820   http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE





Re: [agi] Why evolution? Why not neuroscience?

2007-03-23 Thread Richard Loosemore

Pei Wang wrote:

The following is what I wrote on this topic somewhere else:

In general, we should see intelligence and evolution as two different
forms of adaptation. Roughly speaking, intelligence is achieved
through experience-driven changes ("learning" or "conditioning")
within a single system, while evolution is achieved through
experience-independent changes ("crossover" or "mutation") across
generations of systems. The "intelligent" changes are more
justifiable, gradual, and reliable, while the "evolutionary" changes
are more incidental, radical, and risky. Though the two processes do
have some common properties, their basic principles and procedures are
quite different.

Pei

On 3/23/07, Andrew Babian <[EMAIL PROTECTED]> wrote:
Eugen discussed evolution as a development process.  I just wanted to comment
about what Minsky said in his talk (and I have to thank this list for
pointing out that resource).  He said that the problem with evolution is
that it throws away the information about why bad solutions failed.  That
really has affected my thinking about it, since I was thinking that it at
least sounded like a pretty good idea.  But it is really a very terrible
waste, and I no longer really think it is such a great model to use.  I'm
not sure what adaptations could be made to make up for that loss, but surely
there could be an improvement over evolution, even in a system of random
generation and recombination with competitive survival.
andi


Andi's and Pei's comments bring me to a question I was just about to ask.

What exactly do people mean by "evolutionary computation" anyway?

I always thought I knew what they meant, and it is the same meaning that 
Andi and Pei are alluding to above (Koza, genetic algorithms, etc):  an 
approach that stays pretty close to the kind of evolution that DNA likes 
to do:  with crossover, mutation, etc, operating on things that are 
represented as strings of symbols.
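To make that sense of "evolutionary" concrete, here is a deliberately minimal genetic-algorithm sketch in Python. The bit-string target, the fitness function, and every parameter value are invented for illustration; this is not Koza's setup or anyone's actual system, just the crossover/mutation/selection loop on symbol strings that the paragraph above describes:

```python
import random

random.seed(0)
TARGET = "1111111111"  # toy goal: evolve a string of all ones

def fitness(s):
    # number of positions matching the target
    return sum(a == b for a, b in zip(s, TARGET))

def crossover(a, b):
    # single-point crossover on two symbol strings
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(s, rate=0.1):
    # flip each bit independently with probability `rate`
    return "".join(("0" if c == "1" else "1") if random.random() < rate else c
                   for c in s)

def evolve(pop_size=20, generations=50):
    pop = ["".join(random.choice("01") for _ in TARGET) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:2]                       # keep the two best unchanged
        parents = pop[: pop_size // 2]        # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

Whether a loop like this is a good metaphor for anything beyond blind search is, of course, exactly the question being asked.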


As such, I agree with Pei's critique exactly:  evolution is just not a 
very good metaphor.


But here is my worry:  are people starting to use "evolutionary" in a 
more general sense, meaning something like a generalized form of 
evolution that is really just adaptation?  Would some kinds of NN system 
be "evolutionary"?  Would evolution be the right word for something that 
tries new possibilities all the time and uses *some* kind of mechanism 
for strengthening the things that work?  Even if it did not use DNA-like 
strings of symbols?


If they were doing this, I'd have to pay more attention.

I don't think this is happening, but I can't be sure.

If any one has any more perspective on this, I'd be interested.



Richard Loosemore






Re: [agi] My proposal for an AGI agenda

2007-03-23 Thread David Clark

- Original Message - 
From: "Kevin Peterson" <[EMAIL PROTECTED]>
To: 
Sent: Thursday, March 22, 2007 11:11 PM
Subject: Re: [agi] My proposal for an AGI agenda


> This thread has been going on for what, weeks? The argument has been
> going on since the beginning of time. If you think you need to invent
> a new language to accomplish something, you don't know how to do it,
> and creating the language will accomplish nothing. Every good language
> has been a refinement of techniques that have grown popular in other
> languages.

Is this conclusion based on your personal experience with many programming
languages?  I have found that the ease of programming, the speed of execution,
and the quality of the result of any system I have programmed are quite
dependent on the language used.  Have you ever tried to make a device driver
in PowerBuilder?  Would you like to write an inventory control system in
Assembler?

> Solutions are best to well specified problems. "AI is hard" isn't
> specific. "Self modifying code is difficult in Java" is the kind of
> problem that may warrant using a different language. Wait, let me
> qualify that. "Self modifying code is difficult in Java _and I've got
> a design thought up that will make use of self modifying code_" is the
> kind of problem that may warrant using a different language.

I have created a system that makes "Self modifying code" and I have a design
that "will make use of self modifying code".  This is exactly why I created
this language in the first place.
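For concreteness, here is a minimal sketch of what self-modifying code looks like in a language that exposes its own compiler at run time (Python for illustration; this is not David's system, and the `score` function is invented for the example):

```python
# Start with a function defined from a source string.
source = "def score(x):\n    return x + 1\n"

namespace = {}
exec(compile(source, "<generated>", "exec"), namespace)
score = namespace["score"]
assert score(1) == 2

# The running program now rewrites its own definition of `score` and
# recompiles it, changing its behaviour without restarting.
new_source = source.replace("x + 1", "x * 2")
exec(compile(new_source, "<generated>", "exec"), namespace)
score = namespace["score"]
print(score(3))  # prints 6
```

In Java the equivalent trick means generating bytecode or invoking a compiler at run time, which is the friction the quoted "self modifying code is difficult in Java" remark points at.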

> But AGI is not going to be hacked together by some undergrad between WOW
> sessions once he's given the "right" tools.

Totally in agreement, but are all the people on this list undergrads?  Does
the fact that most people could be given any computer language tool mean that
they will be able to use it successfully?  Is this an argument for NOT
creating better tools?

> The portions of the first seed AGI that are written by humans will not
> be written in a language designed for that project.

I would be interested in knowing what arguments or experience you have that
would lead to this conclusion.  I can't think of any!

-- David Clark




Re: [agi] Why evolution? Why not neuroscience?

2007-03-23 Thread Pei Wang

The following is what I wrote on this topic somewhere else:

In general, we should see intelligence and evolution as two different
forms of adaptation. Roughly speaking, intelligence is achieved
through experience-driven changes ("learning" or "conditioning")
within a single system, while evolution is achieved through
experience-independent changes ("crossover" or "mutation") across
generations of systems. The "intelligent" changes are more
justifiable, gradual, and reliable, while the "evolutionary" changes
are more incidental, radical, and risky. Though the two processes do
have some common properties, their basic principles and procedures are
quite different.
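A toy sketch of that contrast (my own illustration, not Pei's; Python, with an arbitrary hidden value and arbitrary rates): the "intelligent" adaptor updates a single system gradually from its error signal, while the "evolutionary" adaptor never sees the error directly, only selection over blind variations across generations:

```python
import random

HIDDEN = 0.7  # the value both adaptors are trying to approximate

def learn(steps=200, rate=0.1):
    # experience-driven change within a single system: gradual, justifiable
    estimate = 0.0
    for _ in range(steps):
        estimate += rate * (HIDDEN - estimate)  # feedback from experience
    return estimate

def evolve(generations=200, pop_size=10, sigma=0.05):
    # experience-independent change across generations: variation + selection
    random.seed(1)
    pop = [random.uniform(0, 1) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda x: abs(HIDDEN - x))              # selection
        survivors = pop[: pop_size // 2]
        offspring = [x + random.gauss(0, sigma) for x in survivors]  # mutation
        pop = survivors + offspring
    return min(pop, key=lambda x: abs(HIDDEN - x))

print(learn(), evolve())
```

Both converge on the hidden value here, but the first does so by justified incremental steps and the second by incidental, riskier jumps, which is the distinction being drawn.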

Pei

On 3/23/07, Andrew Babian <[EMAIL PROTECTED]> wrote:

Eugen discussed evolution as a development process.  I just wanted to comment
about what Minsky said in his talk (and I have to thank this list for
pointing out that resource).  He said that the problem with evolution is
that it throws away the information about why bad solutions failed.  That
really has affected my thinking about it, since I was thinking that it at
least sounded like a pretty good idea.  But it is really a very terrible
waste, and I no longer really think it is such a great model to use.  I'm
not sure what adaptations could be made to make up for that loss, but surely
there could be an improvement over evolution, even in a system of random
generation and recombination with competitive survival.
andi



Re: [agi] My proposal for an AGI agenda

2007-03-23 Thread David Clark
I have a Math minor from university, but in 32 years of computer work I
haven't used more than grade 12 Math in any computer project yet.  I have
produced thousands of programs for at least 100 clients, including creating a
language/database that sold over 30,000 copies.  I have done system
programming and lots of assembly language, as well as major applications in
PowerBuilder and Visual FoxPro.  Everyone is entitled to their opinion, but
if Math wasn't required at any point in my career, I fail to see how it is
necessary for the creation of an AGI or any other major programming effort.

I created a bond comparison program for a major wealth investment firm that
used a pretty fancy formula at its core, but I just typed it in.  I didn't
have to create it, prove it, or even understand exactly why it was any good.
I think Math is fine, but computer programs from my perspective are a totally
different creature.  How much Math do you think is used by the brains of
most human beings?  By the way, I did complete a 3rd-year computer
engineering course in 2000 where the professor tried to show how you could
make and prove programs correct with a totally Math-based system.  Good thing
he is a professor and doesn't have to work in the real world.

A CPU executes instructions including assignment, conditionals, and simple
looping.  How can a language not have these things and still be useful?
Functional languages seem to refute this statement, but they still must be
implemented in procedural internal code.  Declarative languages like SQL and
report generators are already implemented in procedural languages (so it
doesn't have to be either/or), and more could be as needed.  Isn't useful and
practical better than "revolutionary" and limited?
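As a trivial illustration of functional constructs bottoming out in procedural code (Python used for the sketch; `my_map` is an invented name), a functional-style `map` is, underneath, just assignment and a loop:

```python
def my_map(fn, xs):
    # the "functional" interface...
    out = []
    for x in xs:           # ...implemented with a plain procedural loop
        out.append(fn(x))  # and plain assignment
    return out

print(my_map(lambda n: n * n, [1, 2, 3]))  # prints [1, 4, 9]
```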

Would you define total introspection and many built-in tools to create
efficient programs using programs to be "same old things rehashed"?  Is
having a very fast built-in database with triggers and stored procedures
just laissez-faire?  Most human invention isn't brand-new concepts or ideas
but different and useful combinations of things already known, IMO.

I made a point about the efficiency of creating high-level languages in the
language of the AGI.  I argue that this causes a performance hit of up to
100x or more, depending on the complexity of the code (less complex code
means a bigger performance hit).  With the tools I have put into my language,
higher-level functional or other languages can easily be made and then
compiled into the native language for huge cycle savings.  This can't be said
for all the languages I have looked at so far.
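The "compile the higher-level language down to the native language" point can be sketched in a few lines. This hypothetical mini-DSL and the `compile_dsl` helper are invented for illustration; the idea is that the DSL is translated once into host-language source and compiled, so running the result costs no per-statement interpretation:

```python
def compile_dsl(program):
    # Translate each DSL line into a line of the generated function body,
    # then compile the whole thing once with the native compiler.
    lines = [ln.strip() for ln in program.splitlines() if ln.strip()]
    body = "\n".join("    " + ln for ln in lines)
    src = "def generated(x):\n" + body + "\n    return result\n"
    ns = {}
    exec(compile(src, "<dsl>", "exec"), ns)
    return ns["generated"]

f = compile_dsl("""
    doubled = x * 2
    result = doubled + 1
""")
print(f(10))  # prints 21
```

Here the "DSL" happens to share Python's expression syntax to keep the sketch short; a real higher-level language would do genuine translation at this step, but the cycle-saving structure (translate once, then run native code) is the same.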

If you care to comment on any of my points directly, I would be more than
happy to respond.

-- David Clark

- Original Message - 
From: "John Rose" <[EMAIL PROTECTED]>
To: 
Sent: Thursday, March 22, 2007 10:26 PM
Subject: RE: [agi] My proposal for an AGI agenda


> Enhancements to existing computer languages, or new computer languages that
> could possibly grease the wheels for AGI development, would align the
> language more closely with mathematics.  Many computer languages are the
> same old things rehashed in different, new, and evolutionarily better ways,
> but nothing I've seen is too revolutionary.  Same old looping structures,
> classes, OOP, etc.; nothing new.  But if someone could add some handy math
> structures into the language (group/ring theory, category theory, advanced
> graph structures) and have this stuff built right in, not utilized through
> add-ons or libraries or coded in-house.  These math structures should be
> standardized and made part of the language now.  Believe me,
> non-mathematician programmers would learn these tools rapidly and use them
> in new and exciting ways.  Right now many are going in all sorts of
> wild-goose-chase directions due to the lack of standard mathematical
> guidance built in.  Growing a crop of potential AGI developers who don't
> need math PhDs would happen if this came about.  Granted, good C++ coders
> can accomplish just about "anything" with the language, but there is a
> complexity-overhead tradeoff that needs to be maintained in doing so (in
> exchange for speed, usually, in C++).  But as many here understand, with
> good patterns, algorithms, and particularly math systems and structures,
> speed issues can be bridged and many times eliminated by designing and
> solving things through math versus CPU cycles.  Naturally, AGI systems can
> and do have handcrafted or other incorporated languages built in, but these
> too often suffer from the same limitations.  Though I imagine certain AGIs
> have some pretty advanced languages cooked up inside, and perhaps these are
> the ones that grapple more efficiently with machine and network resource
> limitations.
>
> John
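As a rough sketch of the kind of built-in math structure the quoted message asks for (my illustration, not John's; the class and method names are invented), here is a cyclic group whose axioms the runtime itself can verify by brute force:

```python
from itertools import product

class CyclicGroup:
    """The integers 0..n-1 under addition modulo n."""

    def __init__(self, n):
        self.n = n
        self.elements = list(range(n))

    def op(self, a, b):
        return (a + b) % self.n

    def identity(self):
        return 0

    def inverse(self, a):
        return (-a) % self.n

    def is_group(self):
        # brute-force check of identity, inverses, and associativity
        e = self.identity()
        return (all(self.op(a, e) == a for a in self.elements)
                and all(self.op(a, self.inverse(a)) == e for a in self.elements)
                and all(self.op(self.op(a, b), c) == self.op(a, self.op(b, c))
                        for a, b, c in product(self.elements, repeat=3)))

g = CyclicGroup(5)
print(g.is_group(), g.op(3, 4))  # prints True 2
```

A language with this as a standard, optimized primitive, rather than something each programmer codes in-house, is the suggestion being made.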




Re: [agi] Why evolution? Why not neuroscience?

2007-03-23 Thread Andrew Babian
Eugen discussed evolution as a development process.  I just wanted to comment
about what Minsky said in his talk (and I have to thank this list for
pointing out that resource).  He said that the problem with evolution is
that it throws away the information about why bad solutions failed.  That
really has affected my thinking about it, since I was thinking that it at
least sounded like a pretty good idea.  But it is really a very terrible
waste, and I no longer really think it is such a great model to use.  I'm
not sure what adaptations could be made to make up for that loss, but surely
there could be an improvement over evolution, even in a system of random
generation and recombination with competitive survival.
andi



Re: [agi] Why evolution? Why not neuroscience?

2007-03-23 Thread Chuck Esterbrook

It's cool that you posted the material here in case the site goes down,
and also for searching purposes (as in searching my mailbox or the
archives), but I just wanted to point out that the original blog entry
has some good links in it.  In other words, some of the text is linked
to related stories.


On 3/23/07, Eugen Leitl <[EMAIL PROTECTED]> wrote:


http://www.greythumb.org/blog/index.php?/archives/193-Why-evolution-Why-not-neuroscience.html#extended

Tuesday, March 20. 2007

Why evolution? Why not neuroscience?

Opinion

I was reading about Numenta's NuPIC platform today, and it occurred to me
that there are really two big promising directions in machine learning/AI
today: evolutionary computation and brain reverse-engineering. Some readers
might be curious as to why I'm working on evolutionary computation and not
neuroscience-based approaches. I thought of a good metaphor to explain, as
well as a few practical reasons.

First of all, my intent is not to say "everyone else's work sucks and my
approach rules!" The prevalence of that kind of counterproductive
ego-parading is one of the things that I don't like about the AI field. The
AI field eats its young. It's one of the reasons I seldom use the term "AI"
to refer to my own work, with the other being the history of over-the-top
hype associated with it. (If you're wondering... no, I don't think that
AutoCore is going to make people immortal or bring about the singularity. It
might help us diagnose diseases though, or find oil, or control robots, or
have really challenging immersive game characters.)

But I do figure that people might be curious, especially since reverse
engineering the brain is definitely the approach that garners the most press
attention. The field seems like it has a funny bias against evolution, but I
digress.

So here's a metaphor. It's not a perfect metaphor, and as I'll explain later
it sounds a little more critical of the other approaches than I really intend
to be. But it is a clear metaphor, which is why I use it.

Imagine that you're trying to figure out what fire is. To me, the brain
reverse engineering approach is like sitting around and meticulously
recording what the fire is doing. "Ok... a blue flicker is followed by two
brighter orange flickers whose cones have the following shape..." By
contrast, I see the evolutionary approach as being more like "Fire happens
when oxygen combines with reducing agents. Heat a reducing agent to a high
enough temperature in the presence of oxygen, and you get a fire." I would
then add "who cares how the fire flickers?"

Like I said, this is a little more uncharitable than I want to be. Staring at
flames is not likely to get you anywhere at all when it comes to
understanding fire, but studying the brain may indeed get you somewhere when
it comes to AI. The metaphor is this: the algorithms, structures, and
biological processes of the brain are the flames, while evolution is the
process of oxidation-reduction that produced them. Studying individual
cross-sections of the totality of what the brain is doing might teach you a
cool algorithm. But I really don't think that the ultimate source of
intelligence is a single algorithm.

I think that the ultimate source of intelligence is the process that
generates algorithms. That process is evolution, in all its self-adaptive
recursive evolvability-evolving glory. Biological evolution is a
process-generating process; an algorithm that generates algorithms (that
generates algorithms that...).

So there you have it. That's why I think that evolutionary approaches are
important. Evolutionary approaches have the potential to generate algorithms,
including but not limited to algorithms like hierarchical temporal memory, on
demand.

Last but not least, there are practical reasons. Approaches like the Numenta
NuPIC platform still require a human engineer to do a fair amount of work
defining a HTM to solve a particular problem. I think that evolutionary
approaches should be able to do even more of the engineering for you, leading
to systems with really really simple interfaces that require the human
programmer to do very little work. AutoCore (to be released in alpha soon!)
is almost there I think; you can do adaptive image classification with it in
188 lines of C++ code including comments and the code required to load the
training images. But I think we can go farther still (and some problems
are not going to be as straightforward as simple image classification!). I
envision a system where you can design a problem representation in some kind
of high-level meta-language or even a graphical interface and then just hit
"start" and let it run.





Re: [agi] Why evolution? Why not neuroscience?

2007-03-23 Thread Jean-Paul Van Belle
I like the metaphor. The other good reason NOT to go for neuroscience
(i.e. against Ray Kurzweil's uploading-the-human-brain argument) is that
it may *not* be scalable. Nature may well have pushed close to the limit
of biological intelligence (the argument in favour being that superior
intelligence is a strong survival trait). Double a neural network's size
and it won't work twice as well.
However, evolutionary computing also has a problem: the space of
possible algorithms/computer programs becomes gigantic; see the busy
beaver function. So the more complex algorithms necessary for A(G)I may
never be found.
My money (and, of course, my personal AGI project ;-) is on reverse
engineering AI.
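The size of that search space is easy to make concrete. In the standard 2-symbol, n-state Turing machine setting of the busy beaver function, each of the 2n transition-table entries picks a written symbol, a move direction, and a next state including halt, giving (4(n+1))^(2n) machines. A quick Python check of how fast this explodes (these are exact machine counts, not busy beaver values):

```python
def num_turing_machines(n):
    # 2-symbol, n-state machines: 2n transition-table entries, each choosing
    # a write symbol (2) x move direction (2) x next state incl. halt (n+1)
    return (4 * (n + 1)) ** (2 * n)

for n in range(1, 6):
    print(n, num_turing_machines(n))  # 64 at n=1, over 6e13 by n=5
```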

>>> Eugen Leitl <[EMAIL PROTECTED]> 2007/03/23 13:15:23 >>>
> Why evolution? Why not neuroscience?
> I was reading about Numenta's NuPIC platform today, and it occurred to me
> that there are really two big promising directions in machine learning/AI
> today: evolutionary computation and brain reverse-engineering. Some readers
> might be curious as to why I'm working on evolutionary computation and not
> neuroscience-based approaches.



[agi] Why evolution? Why not neuroscience?

2007-03-23 Thread Eugen Leitl

http://www.greythumb.org/blog/index.php?/archives/193-Why-evolution-Why-not-neuroscience.html#extended

Tuesday, March 20. 2007

Why evolution? Why not neuroscience?

Opinion

I was reading about Numenta's NuPIC platform today, and it occurred to me
that there are really two big promising directions in machine learning/AI
today: evolutionary computation and brain reverse-engineering. Some readers
might be curious as to why I'm working on evolutionary computation and not
neuroscience-based approaches. I thought of a good metaphor to explain, as
well as a few practical reasons.

First of all, my intent is not to say "everyone else's work sucks and my
approach rules!" The prevalence of that kind of counterproductive
ego-parading is one of the things that I don't like about the AI field. The
AI field eats its young. It's one of the reasons I seldom use the term "AI"
to refer to my own work, with the other being the history of over-the-top
hype associated with it. (If you're wondering... no, I don't think that
AutoCore is going to make people immortal or bring about the singularity. It
might help us diagnose diseases though, or find oil, or control robots, or
have really challenging immersive game characters.)

But I do figure that people might be curious, especially since reverse
engineering the brain is definitely the approach that garners the most press
attention. The field seems like it has a funny bias against evolution, but I
digress.

So here's a metaphor. It's not a perfect metaphor, and as I'll explain later
it sounds a little more critical of the other approaches than I really intend
to be. But it is a clear metaphor, which is why I use it.

Imagine that you're trying to figure out what fire is. To me, the brain
reverse engineering approach is like sitting around and meticulously
recording what the fire is doing. "Ok... a blue flicker is followed by two
brighter orange flickers whose cones have the following shape..." By
contrast, I see the evolutionary approach as being more like "Fire happens
when oxygen combines with reducing agents. Heat a reducing agent to a high
enough temperature in the presence of oxygen, and you get a fire." I would
then add "who cares how the fire flickers?"

Like I said, this is a little more uncharitable than I want to be. Staring at
flames is not likely to get you anywhere at all when it comes to
understanding fire, but studying the brain may indeed get you somewhere when
it comes to AI. The metaphor is this: the algorithms, structures, and
biological processes of the brain are the flames, while evolution is the
process of oxidation-reduction that produced them. Studying individual
cross-sections of the totality of what the brain is doing might teach you a
cool algorithm. But I really don't think that the ultimate source of
intelligence is a single algorithm.

I think that the ultimate source of intelligence is the process that
generates algorithms. That process is evolution, in all its self-adaptive
recursive evolvability-evolving glory. Biological evolution is a
process-generating process; an algorithm that generates algorithms (that
generates algorithms that...).

So there you have it. That's why I think that evolutionary approaches are
important. Evolutionary approaches have the potential to generate algorithms,
including but not limited to algorithms like hierarchical temporal memory, on
demand.

Last but not least, there are practical reasons. Approaches like the Numenta
NuPIC platform still require a human engineer to do a fair amount of work
defining a HTM to solve a particular problem. I think that evolutionary
approaches should be able to do even more of the engineering for you, leading
to systems with really really simple interfaces that require the human
programmer to do very little work. AutoCore (to be released in alpha soon!)
is almost there I think; you can do adaptive image classification with it in
188 lines of C++ code including comments and the code required to load the
training images. But I think we can go farther still (and some problems
are not going to be as straightforward as simple image classification!). I
envision a system where you can design a problem representation in some kind
of high-level meta-language or even a graphical interface and then just hit
"start" and let it run.
