Re: Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-19 Thread Ricardo Barreira

On 2/18/07, Charles D Hixson [EMAIL PROTECTED] wrote:

You might check out D ( http://www.digitalmars.com/d/index.html ).  Mind
you, it's still in its quite early days and missing a lot of libraries
... which means you need to construct interfaces to the C versions.
Still, it answers several of your objections, and has partial answers to
at least one of the others.


I was going to try out D some time ago, but decided not to when I
learned that they use Hans Boehm's conservative garbage collector. I
find conservative garbage collection to be very inelegant and too
error prone for my taste, even if it works well in practice for most
projects...

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-19 Thread Lukasz Kaiser

Hi,

I was offline and missed the large discussion so let me just add my 2c:


Cobra is currently at a late alpha stage. There are some docs
(including a comparison to Python) and examples. (And pardon my plain
looking web site, but I have no graphics skills.) Here it is:
http://cobralang.com/


Nice :). You might want to check another open-source .Net language
called Nemerle (nemerle.org). It is quite stable now, reasonably efficient
and has bindings to some IDEs (VS, MonoDevelop). It is primarily
a functional language and not that python-like, but it has a special
option that allows you to switch to python-like syntax (white-space
and newline delimiters, etc.). And it has very nice lisp-like macros :).


Far and away, the best answer to the best language question is the .NET
framework.  If you're using the framework, you can use any language that has
been implemented on the framework (which includes everything from C# to the
OCAML-like F# and nearly every language in between -- though obviously some
implementations are better than others) AND you can easily intermix
languages (so the answer to best language will vary from piece to piece).


Unfortunately, after being involved in .Net for quite some time, I do not
share your optimism. In fact I came to think that .Net is not suitable
for anything that requires really high performance and parallelism.
Perhaps the problem is just that it is very very hard to build a really
good VM and probably impossible to build one that will be good for
more than one programming paradigm. As long as you do imperative
OO programming .Net might be ok and your comments about mixing
languages are right. But if you start doing functional and  generative
programming it will be a pain and a performance bottleneck. In that case
you need things like MetaOCaml (www.metaocaml.org) for generative
programming or OCamlP3l for easy parallelism (ocamlp3l.inria.fr/eng.htm).
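Lukasz's point about generative programming can be made concrete. Below is a minimal Python sketch (illustrative only; MetaOCaml does the same thing with static typing of the generated fragments) that builds a specialized function at run time:

```python
# Sketch of generative programming: build a specialized function at
# run time from a description, then compile and execute it.
# (Illustrative Python only; not how MetaOCaml actually works.)

def make_polynomial(coeffs):
    """Generate source for a polynomial evaluator specialized to coeffs."""
    terms = " + ".join(f"{c} * x**{i}" for i, c in enumerate(coeffs))
    source = f"def poly(x):\n    return {terms}\n"
    namespace = {}
    exec(compile(source, "<generated>", "exec"), namespace)
    return namespace["poly"]

poly = make_polynomial([1, 0, 3])   # specializes to 1 + 3*x**2
print(poly(2))                      # 1 + 3*4 = 13
```

The generated function carries no interpretive overhead for the coefficient list once compiled, which is the whole appeal of the technique.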

- lk



Re: Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-18 Thread Samantha Atkins
Richard Loosemore wrote:
 Aki Iskandar wrote:

 Hello -

 I'm new on this email list.  I'm very interested in AI / AGI - but do
 not have any formal background at all.  I do have a degree in
 Finance, and have been a professional consultant / developer for the
 last 9 years (including having worked at Microsoft for almost 3 of
 those years).

 I am extremely happy to see that there are people out there that
 believe AGI will become a reality - I share the same belief.  Most,
 to all, of my colleagues see AI as never becoming a reality.  Some
 that do see intelligent machines becoming a reality - believe that it
 is hardware, not software, that will make it so.  I believe the
 opposite ... in that the key is in the software - the hardware we
 have today is ample.

 The reason I'm writing is that I am curious (after watching a couple
 of the videos on google linked off of Ben's site) as to why you're
 using C++ instead of other languages, such as C#, Java, or Python. 
 The later 2, and others, do the grunt work of cleaning up resources -
 thus allowing for more time to work on the problem domain, as well as
 saving time in compiling, linking, and debugging.

 I'm not questioning your decision - I'm merely curious to learn about
 your motivations for selecting C++ as your language of choice.

 Thanks,
 ~Aki

 It is not always true that C++ is used (I am building my own language
 and development environment to do it, for example), but if C++ is most
 common in projects overall, that probably reflects the facts that:

 (a) it is most widely known, and
 (b) for many projects, it does not hugely matter which language is used.

 Frankly, I think most people choose the language they are already most
 familiar with.  There just don't happen to be any Cobol-trained AI
 researchers ;-).

 Back in the old days, it was different.  Lisp and Prolog, for example,
 represented particular ways of thinking about the task of building an
 AI.  The framework for those paradigms was strongly represented by the
 language itself.


What do you have in mind?  Pretty much every mechanism in any computer
language known was initially developed and often perfected in Lisp. 
Thus it does not seem to me that Lisp was at all tied to a particular form
of program or programming, much less to some forms of AI.

- samantha



Re: Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-18 Thread Samantha Atkins
Eugen Leitl wrote:
 On Sat, Feb 17, 2007 at 08:24:21AM -0800, Chuck Esterbrook wrote:

   
 What is the nature of your language and development environment? Is it
 in the same neighborhood as imperative OO languages such as Python and
 Java? Or something different like Prolog?
 

 There are some very good Lisp systems (SBCL) with excellent compilers,
 rivalling C and Fortran in code quality (if you avoid common pitfalls
like consing). Together with code and data being represented by
the same data structure, and better support for code generation by code
than any other language I've heard of, this makes Lisp an evergreen
for classical AI domains. (Of course AI is a massively parallel
number-crunching application, so Lisp isn't all that helpful here).

   
Really?  I question whether you can get anywhere near the same level of
reflection and true data - code equivalence in any other standard
language.  I would think this capability might be very important
especially to a Seed AI.
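Samantha's data/code equivalence claim can be approximated even outside Lisp. A hedged Python sketch using the standard ast module (Python is not homoiconic, so this only gestures at what Lisp gives natively):

```python
# Sketch of code-as-data, the property Samantha describes in Lisp.
# Python's ast module lets a program treat code as a manipulable
# data structure, which approximates (but does not equal) homoiconicity.
import ast

tree = ast.parse("x + 2 * y", mode="eval")

# The program can inspect code as data...
names = [n.id for n in ast.walk(tree) if isinstance(n, ast.Name)]
print(names)  # ['x', 'y']

# ...and transform it: rewrite every Name 'y' into the constant 10.
class SubstituteY(ast.NodeTransformer):
    def visit_Name(self, node):
        if node.id == "y":
            return ast.Constant(value=10)
        return node

new_tree = ast.fix_missing_locations(SubstituteY().visit(tree))
code = compile(new_tree, "<rewritten>", "eval")
print(eval(code, {"x": 1}))  # 1 + 2 * 10 = 21
```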



Re: Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-18 Thread Eugen Leitl
On Sun, Feb 18, 2007 at 12:40:03AM -0800, Samantha Atkins wrote:

 Really?  I question whether you can get anywhere near the same level of
 reflection and true data - code equivalence in any other standard
 language.  I would think this capability might be very important
 especially to a Seed AI.

Lisp is really great as a language for large scale software systems, which
really push the envelope of software development in terms of sheer size and
complexity while remaining functional and useful. With parallel extensions
(asynchronous message passing primitives equivalent to at least a subset of
MPI) and run on suitable (10^6..10^9 node) hardware, there's no reason why
Lisp couldn't do AI, in principle. It might not be the best tool for the
job, but certainly not the worst, either.

However, the AI school represented here seems to assume a seed AI (an
open-ended agent capable of directly extracting information from its
environment) is sufficiently simple to be specified by a team of human
programmers, and implemented explicitly by a team of human programmers.
This type of approach is most clearly represented by Cyc, which is sterile.
The reason is the assumption that the internal architecture of human
cognition is fully inspectable by human analyst introspection alone, and
that furthermore the resulting extracted architecture is below the
complexity ceiling accessible to a human team of programmers. I believe
both assumptions are incorrect.

There are approaches involving stochastic methods, information theory and
evolutionary computation which appear potentially fertile, though the
details of the projects are hard to evaluate, since they lack sufficient
numbers of peer-reviewed publications, source code, or even interactive
demonstrations. Lisp does not particularly excel at these numerics-heavy
applications, though e.g. Koza used a subset of Lisp s-expressions with
reasonably good results. The MIT Scheme folks demonstrated automated chip
design long ago, so in principle Lisp could play well with today's large
FPGAs.

-- 
Eugen* Leitl leitl http://leitl.org
__
ICBM: 48.07100, 11.36820    http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE





Re: Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-18 Thread Chuck Esterbrook

On 2/18/07, Mark Waser [EMAIL PROTECTED] wrote:

Chuck is also absolutely incorrect that the only way to generate code by
code is to use Reflection.Emit.  It is very easy to have your code write
code in any language to a file (either real or virtual), compile it, and
then load the resulting library (real or virtual) anytime you want/need it.


I'm not incorrect--because I never said that. Aki Iskandar brought
that issue up. Then I pointed out that .NET code executes much faster
than Python. I was not stating or implying that Reflection.Emit was
the only means to produce .NET code.

My Cobra compiler, for example, currently generates C# instead of
bytecode, which has numerous advantages:
(a) faster bootstrapping (C# is higher level than bytecode)
(b) leverage the excellent bytecode generation of the C# compiler
(c) use C#'s error checking as an extra guard against deficiencies in
my pre-1.0 compiler


There is absolutely no run-time cost to this method (if you're keeping the
compiled code somewhere in your knowledge base) since you're dealing with
compiled code (as long as you know how to manage spawning and killing
threads and processes so that you don't keep nine million libraries loaded
that you'll never use again).


Well absolutely no run-time cost is a bit strong. Code generation
itself takes time, no matter what technique you use. And if you go the
generate source code route then writing it to disk, invoking a
compiler and linking it back in is a pretty slow process. I've looked
for a way to do it all in memory, but haven't found one. (You can
actually link in the C# compiler as a DLL so it's resident in your
process, but its API still wants a disk-based file.)

But unless you're throwing away your generated code very quickly
without using it much (seems unlikely), you'll make up the difference
quite easily.

And even dynamically loading DLLs and managing how you use them,
unload them, etc. has *some* cost.
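Chuck's amortization argument can be sketched in Python (illustrative; the .NET version would cache a loaded assembly rather than a Python function object):

```python
# Sketch of the amortization point: code generation costs time up
# front, but caching the compiled result makes every later use cheap.
_cache = {}

def specialized_adder(n):
    """Return a function computing x + n, generating code only once per n."""
    if n not in _cache:
        src = f"def add(x):\n    return x + {n}\n"  # the 'slow' generation step
        ns = {}
        exec(compile(src, "<gen>", "exec"), ns)
        _cache[n] = ns["add"]
    return _cache[n]                                # later requests hit the cache

add5 = specialized_adder(5)
print(add5(3))                       # 8
assert specialized_adder(5) is add5  # second request reuses the compiled code
```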


I also wouldn't sneer at using an established enterprise-class database to
serve as one or more of your core knowledge stores.  There is *a lot* of

...

You are absolutely...correct. I think the utility of existing database
servers is very underappreciated in academia and many AI researchers
are from academia or working on academia style projects (gov't
research grants or work to support research--not that there's anything
wrong with that!). But it's too bad as databases have a lot to offer.
Anyone, feel free to ask if you want me to expand.
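As a sketch of the database-as-knowledge-store idea, here is a minimal triple store in Python using the standard sqlite3 module. The schema and facts are hypothetical, chosen only to show how little code a queryable store requires:

```python
# Hypothetical triple-store schema on top of an ordinary SQL database,
# illustrating the knowledge-store idea discussed above.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE facts (subject TEXT, predicate TEXT, object TEXT)")
db.executemany("INSERT INTO facts VALUES (?, ?, ?)", [
    ("lisp",   "is_a",     "language"),
    ("python", "is_a",     "language"),
    ("lisp",   "supports", "macros"),
])

# Query: which subjects are languages?
rows = db.execute(
    "SELECT subject FROM facts WHERE predicate = 'is_a' AND object = 'language'"
).fetchall()
print([r[0] for r in rows])  # ['lisp', 'python']
```

A real system would add indexes and transactions, which is exactly the engineering the database engine provides for free.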


The dumbest thing AGI researchers do is re-invent the wheel constantly when
it isn't necessary.  I'm heartily with Richard Loosemore and his call for
building a research infrastructure instead of all the walled gardens (with
long, low learning curves and horrible enhancement curves) that we have
currently.


Some reuse is easy. Fairly generic components like languages and
databases are easy to leverage on a project. After that, it gets very
difficult. Normally, something has to be documented, be stable, run fast,
be on the same platform *and* be the right fit before it will be
adopted on a serious project.

Regarding platform, while you and I like .NET some people will reject
it because Microsoft (and the former Borland engineers they hired to
work on it), created it. I've talked to people who said they would use
it if it were open source. So I point them to Novell Mono (the open
source clone) at which point they claim they can't use it because
Microsoft will eventually shut Novell down. After I point out that
Microsoft submitted .NET as a published standard so that projects like
Novell Mono could take place, well... then it's on to the next excuse.

One legit excuse is that some people already have a huge investment in
other platforms (Java) and cannot turn that around in terms of time
and money. We're already fragmented.

...

dealing with a whole framework rather than just a language).  And, of
course, all of this ignores the ultimate trump that several flavors of LISP
are available on the .NET framework.


Python also runs on .NET. In fact, Microsoft hired the guy that was
implementing Python on .NET and the project (IronPython) is now hosted
by Microsoft. So now you can have your cake, generate a new one at
runtime, dynamically load it, and eat it, too!

-Chuck



Re: Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-18 Thread Aki Iskandar


The dumbest thing AGI researchers do is re-invent the wheel
constantly when it isn't necessary.  I'm heartily with Richard
Loosemore and his call for building a research infrastructure
instead of all the walled gardens (with long, low learning curves
and horrible enhancement curves) that we have currently.


I also have to dispute Samantha's I question whether you can get  
anywhere near the same level of reflection and true data - code  
equivalence in any other standard language.  Reflection is a core  
functionality of the .NET framework and available to *all* .NET  
languages in a much more computationally convenient form than how  
most of LISP's reflection turns out.  I would also argue that a  
higher level retrospection framework is more necessary and more  
easily built in .NET than in LISP (given that you're dealing with a  
whole framework rather than just a language).  And, of course, all
of this ignores the ultimate trump that several flavors of LISP are
available on the .NET framework.
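The kind of reflection being debated here is not exclusive to .NET or Lisp; most managed runtimes expose it. A minimal Python illustration (the Agent class is a made-up example):

```python
# Minimal illustration of runtime reflection: discover and invoke
# methods without knowing the class in advance.
import inspect

class Agent:
    def perceive(self, stimulus):
        return f"saw {stimulus}"

# Discover the class's methods at run time.
methods = [name for name, _ in inspect.getmembers(Agent, inspect.isfunction)]
print(methods)  # ['perceive']

# Invoke a method by its string name.
a = Agent()
result = getattr(a, "perceive")("light")
print(result)   # 'saw light'
```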


- Original Message - From: Chuck Esterbrook  
[EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Saturday, February 17, 2007 5:49 PM
Subject: **SPAM** Re: Languages for AGI [WAS Re: [agi] Priors and  
indefinite probabilities]




On 2/17/07, Aki Iskandar [EMAIL PROTECTED] wrote:
Richard, Danny, Pei, Chuck, Eugen, Peter ... thanks all for answering
my question.

...
C# is definitely a productive language, mainly due to the IDE, and it
is faster than Java - however, it is strongly typed.
Perhaps the disadvantage to C#, from my perspective, is that the only
ways to generate code (by code) is by using the Reflection.Emit and
CodeDOM namespaces.  However, the performance hit is far too costly
to run it - because it has to be compiled (to MSIL / bytecode) and
then the class type has to be loaded, and only then interpreted / run.


Java suffers the same fate, and is slower than C#.

Python is a duck typed language, and has very rich flexibility when
designing data structures.  In addition, it has a few ways to evaluate
code on the fly (enabling code that writes code).


I've cranked out mounds of Python and C#, so I have a few things to
offer on the subject. Regarding C#'s productivity coming mostly from
the IDE, I think that is only part of the picture. C# offers many high
level, productive features including garbage collection, classes,
exception handling, bounds checking, delegates, etc. while at the same
time offering excellent runtime speed. Those features aren't available
in C and some of them aren't even available in C++. C# is also better
designed and easier to use than Java, primarily because it was designed
after Java as a better version of Java.

Python is still faster to crank out code with (and Ruby as well), but
both Python and Ruby are ridiculously slow. That will be a serious
problem if your application is CPU intensive and I believe any AGI
will be (though early exploratory programs may not).

One approach is to use two languages: Yahoo cranked out their
web-based mail site with Python so they could develop it quickly. Then
after it stabilized, they reimplemented it in C++ for performance. Of
course, it would be nice if one language could truly cover both. But
more on that at the end of this message.  :-)

Regarding the overhead of generating code in C#:
* Your AI app may or may not require code generation.
* Python runs so relatively slow that if you execute the generated
code repeatedly, the C# version of the app will still outperform it.

Btw I use WingIDE for Python and recommend it. (And of course VS  
2005 for C#.)


Having said all that--I get frustrated by these situations:
(1) I crank out my solution in Python in record time and then grow old
watching it execute.
(2) I watch my C# code fly at runtime, but it takes me 2-3 times
longer to write it.

Bleck!

So I'm working on a language that combines features from the two. It
targets the .NET platform so that it can leverage the work already
done on garbage collection, machine code, etc. as well as the numerous
third party tools and libraries. (Likewise for Novell Mono--the open
source clone of .NET.)

Cobra is currently at a late alpha stage. There are some docs
(including a comparison to Python) and examples. (And pardon my plain
looking web site, but I have no graphics skills.) Here it is:
http://cobralang.com/

It runs as fast as C# and codes almost as quickly as Python. It also has
language level features for quality control, including contracts,
compile-time null checking and unit tests. These are found in neither
Python nor C# (but are found in some other languages).
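Cobra's contracts are a language feature; in languages without them, the same discipline can be approximated. A hedged Python sketch using a decorator (this is not Cobra's actual semantics, just an analogue):

```python
# Approximating a design-by-contract precondition with a decorator.
# This is an analogue of language-level contracts, not Cobra's syntax.
import functools

def requires(condition, message="precondition failed"):
    """Check a precondition on the first positional argument."""
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(x, *args, **kwargs):
            assert condition(x), message
            return fn(x, *args, **kwargs)
        return wrapper
    return deco

@requires(lambda x: x >= 0, "sqrt_int needs a non-negative input")
def sqrt_int(x):
    return int(x ** 0.5)

print(sqrt_int(9))  # 3
```

The language-level version wins because contracts become part of the declared interface rather than an implementation detail.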

Hey, we're on one of my favorite topics! Feel free to ask questions or
make comments.  :-)

-Chuck


Re: Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-18 Thread Aki Iskandar


Chuck, I looked at Cobra yesterday, and I like it :-)

I will try to get some time and play with it.  My love of Python, and
reluctant admission of appreciating .NET, are pointing me in the
direction of using one of 3 languages:


In no particular order:

1 - Python (CPython)
2 - IronPython
3 - Cobra

but I will also continue to explore Common Lisp as time permits ...
its macros look promising ... but admittedly, it will take me some
time to absorb the language - so for now, it's regular Python,
IronPython, or yours (Cobra)!


One thing for sure though ... at least from my view ... Java and C++  
are just not good enough - when I consider several factors ...  
including productivity.   With the languages out there today, C++  
makes absolutely no sense.   Java is just not as good as .NET ... but  
this is because it came first, and was the .NET guinea pig.  Java was  
great before C# / .NET.


~Aki





Re: Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-18 Thread Mark Waser

[Aki]  This is by far too strong a statement - and most likely incorrect.


Don't play with most likelys.  Either disprove my statement or don't waste 
our time.



Mark, do you work at Microsoft?


No, but the question is irrelevant (as is your having worked at Microsoft
-- except insofar as your believing that it does prove something suggests
that your beliefs are questionable).


there are more reasons than time I have to elaborate why I can't agree 
with your statement.


So give us ONE!  Why are you wasting my attention if you won't back up your 
statements with verifiable facts?


And, from a practical programmatic way of having  code generate code, 
those are the only two ways.  The way you  mentioned - a text file - you 
still have to call the compiler (which  you can do through the above 
namespaces), but then you still have to  bring the dll into the same 
appdomain and process.  In short, it is a  huge performance hit, and in no 
way would seem to be a smooth  transition.


Spoken by a man who has clearly never tried it.  I have functioning code 
that does *exactly* what I outlined.  There is no perceptible delay when the 
program writes, compiles, links, starts a new thread, and executes the 
second piece of new code (the first piece generates a minor delay which I 
attribute to loading the compiler and other tools into memory).
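The pipeline Mark describes (write new code, compile it, run it on a fresh thread) looks roughly like this in Python; his actual code is .NET, so this is only a sketch of the same flow:

```python
# Sketch of the generate / compile / run-on-a-thread pipeline.
# (Illustrative Python; the .NET version compiles to a DLL and loads it.)
import threading

source = "def task(out):\n    out.append(sum(range(10)))\n"
ns = {}
exec(compile(source, "<generated>", "exec"), ns)  # 'compile and link' step

results = []
t = threading.Thread(target=ns["task"], args=(results,))  # run on new thread
t.start()
t.join()
print(results)  # [45]
```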


Also, even if it *did* generate a delay, this function shouldn't happen
often enough for it to be a problem, and there are numerous ways around
the delay (multi-tasking, etc).


BTW - My apologies to Chuck for misattributing the quote.


- Original Message - 
From: Aki Iskandar [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Sunday, February 18, 2007 12:36 PM
Subject: **SPAM** Re: Languages for AGI [WAS Re: [agi] Priors and indefinite 
probabilities]





Before I comment on Mark's response, I think that the best comment on 
this email thread came from Pei, who wrote ...


quote
I guess you can see, from the replies so far, that what language
people choose is strongly influenced by their conception of AI. Since
people have very different opinions on what an AI is and what is the
best way to build it, it is natural that they selected different
languages, based mainly on its convenience for their concrete goal, or
even tried to invent new ones.

Therefore, I don't think there is a consensus on what the most
suitable language is for AI.
end quote

However, there was an upshot to all the replies to the original
question - as with any emotionally charged discourse, there are
nuggets of learning (I'm gaining insights into languages - thus others
have also learned things as well).


ok - now to briefly reply

[Mark]  Far and away, the best answer to the best language question  is 
the .NET framework.


[Aki]  This is by far too strong a statement - and most likely incorrect.
Mark, do you work at Microsoft?  I have, for 3 years (not that it makes
me a .NET expert by any means), and there are more reasons than I have
time to elaborate why I can't agree with your statement.  Two of the
nicest things about .NET are ADO.NET and Reflection.  Java (which I think
is not as strong or as pleasurable to work with) has reflection.  But
something that is readily available for Java (and soon .NET - but not
yet) is object database management systems (ODBMS) - which may be of
better use than traditional RDBMS - and if not, still much better than
ADO.NET from a developer's viewpoint when programming against a datastore.



Chuck is also absolutely incorrect that the only way to generate  code by 
code is to use Reflection.Emit.  It is very easy to have  your code write 
code in any language to a file (either real or  virtual), compile it, and 
then load the resulting library (real or  virtual) anytime you want/need 
it. There is absolutely no run-time  cost to this method (if you're 
keeping the compiled code somewhere  in your knowledge base) since you're 
dealing with compiled code



I'm the one that made that comment about Reflection.Emit - but I also
included CodeDOM.  And, from a practical programmatic way of having code
generate code, those are the only two ways.  The way you mentioned - a
text file - you still have to call the compiler (which you can do through
the above namespaces), but then you still have to bring the dll into the
same appdomain and process.  In short, it is a huge performance hit, and
in no way would seem to be a smooth transition.  There would be lots and
lots of hang time or waiting - and if you did this often, it's just
completely impractical.  Any execution speed advantage that .NET, in its
compiled form, has over a comparatively slower runtime - such as Python,
for example - is lost.  Way lost.


However, I completely agree with Mark's comment about using existing
technologies such as RDBMSs - and not reinventing the wheel.  I know
nothing about Novamente, and so this comment is not meant as Novamente
should

Re: Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-18 Thread Aki Iskandar


Mark -

I don't know you, and have no bones to pick with you.  I have no
basis, nor any motivation, for doing so.


Picking a language is not a science - so to prove or test things,  
well ...


If you believe I'm wasting your time - don't bother reading - or  
replying to my posts.


I, as much as you (or anyone else on this thread / list), have the
right to say what we like.  And by consequence, your email to me
below - as inappropriate, and frankly childish, as it was - was well
within your right.


My only comment is ... stop taking things as attacks.  Get some
thick skin.  Because in science, you need it.  And believe it or
not, I am saying that out of respect for you.  Maybe you're having a
bad day - we all do - but if anyone wastes time, it is people
shouting at others.


Look at your email to me again.  Was this called for?  Look at your  
subsequent email to Eliezer.  Come on man. Lighten up a little.


Everyone else ... I apologize for taking your time to read this
email.  I'm just hoping it'll keep anyone from flaming people and
calling them stupid.


Enough said.  I think we can all get along, and learn something from  
each other.


~Aki



On 18-Feb-07, at 1:21 PM, Mark Waser wrote:

[Aki]  This is by far too strong a statement - and most likely  
incorrect.


Don't play with most likelys.  Either disprove my statement or  
don't waste our time.



Mark, do you work at Microsoft?


No, but the question is irrelevant (as is your working at Microsoft  
--  except so far as your believing that does prove something  
proves that your beliefs are questionable).


there are more reasons than time I have to elaborate why I can't  
agree with your statement.


So give us ONE!  Why are you wasting my attention if you won't back  
up your statements with verifiable facts?


And, from a practical programmatic way of having  code generate  
code, those are the only two ways.  The way you  mentioned - a  
text file - you still have to call the compiler (which  you can do  
through the above namespaces), but then you still have to  bring  
the dll into the same appdomain and process.  In short, it is a   
huge performance hit, and in no way would seem to be a smooth   
transition.


Spoken by a man who has clearly never tried it.  I have functioning  
code that does *exactly* what I outlined.  There is no perceptible  
delay when the program writes, compiles, links, starts a new  
thread, and executes the second piece of new code (the first piece  
generates a minor delay which I attribute to loading the compiler  
and other tools into memory).


Also, even if it *did* generate a delay, this function shouldn't  
happen often enough for it to be a problem, and there are numerous  
ways around the delay (multi-tasking, etc).
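The write / compile / load cycle described above is easy to sketch in Python, where the compile-and-load steps stay in-process (an illustrative analogue only -- the generated function and its name are invented here, and the code actually under discussion is C#):

```python
# Sketch: code generating code, compiling it, and executing it in-process.
# Purely illustrative -- the generated function is invented.
source = """
def double(x):
    # Generated at runtime by the host program.
    return 2 * x
"""

# Compile the generated source once; reuse the code object afterwards,
# so the compilation cost is paid only on first use.
code_obj = compile(source, "<generated>", "exec")

namespace = {}
exec(code_obj, namespace)          # "load" the compiled code
double = namespace["double"]

print(double(21))                  # the generated code runs like any other
```

The point carries over: once the generated code is compiled and cached, calling it costs the same as calling hand-written code.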


BTW - My apologies to Chuck for misattributing the quote.


- Original Message - From: Aki Iskandar [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, February 18, 2007 12:36 PM
Subject: **SPAM** Re: Languages for AGI [WAS Re: [agi] Priors and  
indefinite probabilities]





Before I comment on Mark's response, I think that the best comment  
on this email thread came from Pei, who wrote ...


quote
I guess you can see, from the replies so far, that what language
people choose is strongly influenced by their conception of AI. Since
people have very different opinions on what an AI is and what is the
best way to build it, it is natural that they selected different
languages, based mainly on its convenience for their concrete  
goal, or

even tried to invite new ones.

Therefore, I don't think there is a consensus on what the most
suitable language is for AI.
end quote

However, there was an upshot to all the replies to the original  
question - as with any emotionally charged discourse, there  
are nuggets of learning (I'm gaining insights into languages -  
and thus others have likely learned things as well).


ok - now to briefly reply

[Mark]  Far and away, the best answer to the best language  
question  is the .NET framework.


[Aki]  This is by far too strong a statement - and most likely  
incorrect.  Mark, do you work at Microsoft?  I have, for 3 years  
(not that it makes me a .NET expert by any means), and there are  
more reasons than I have time to elaborate why I can't agree with  
your statement.  Two of the nicest things about .NET are ADO.NET  
and Reflection.  Java (which I think is not as strong or as  
pleasurable to work with) also has reflection.  But something that is  
readily available for Java (and soon .NET - but not yet) is object  
database management systems (ODBMS) - which may be of better use  
than traditional RDBMS - and if not, still much better than  
ADO.NET - from a developer's viewpoint when programming against a  
datastore.



Chuck is also absolutely incorrect that the only way to generate   
code by code is to use Reflection.Emit.  It is very easy to have   
your code write code in any language to a file

Re: Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-18 Thread Chuck Esterbrook

On 2/18/07, Aki Iskandar [EMAIL PROTECTED] wrote:

Chuck, I looked at Cobra yesterday, and I like it :-)


Glad to hear that.  :-)


I will try to get some time and play with it.  My love of Python, and
reluctant admittance of appreciating .NET, are pointing me in the
direction of using one of 3 languages:

In no particular order:

1 - Python (CPython)
2 - IronPython
3 - Cobra

but I will also continue to explore Common Lisp as time permits ...
its macros look promising ... but admittedly, it will take me some
time to absorb the language - so for now, it's regular Python,
IronPython, or yours (Cobra)!


Thanks!

-Chuck

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-18 Thread Chuck Esterbrook

On 2/18/07, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:

Mark Waser wrote:

 Chuck is also absolutely incorrect that the only way to generate code by
 code is to use Reflection.Emit.  It is very easy to have your code write
 code in any language to a file (either real or virtual), compile it, and
 then load the resulting library (real or virtual) anytime you want/need
 it. There is absolutely no run-time cost to this method (if you're
 keeping the compiled code somewhere in your knowledge base) since you're
 dealing with compiled code (as long as you know how to manage spawning
 and killing threads and processes so that you don't keep nine million
 libraries loaded that you'll never use again).

Heh.  Why not work in C++, then, and write your own machine language?
No need to write files to disk, just coerce a pointer to a function
pointer.  I'm no Lisp fanatic, but this sounds more like a case of
Greenspun's Tenth Rule to me.


I find C++ overly complex while simultaneously lacking well known
productivity boosters including:
* garbage collection
* language level bounds checking
* contracts
* reflection / introspection (complete and portable)
* dynamic loading (portable)
* dynamic invocation
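As a rough sketch of what the "contracts" item buys you -- approximated here in Python with an invented decorator, since Python has no native contract syntax the way D and Eiffel do:

```python
# Minimal design-by-contract sketch: a decorator that checks a
# precondition and a postcondition around each call.
def contract(pre, post):
    def wrap(fn):
        def inner(*args):
            assert pre(*args), "precondition violated"
            result = fn(*args)
            assert post(result), "postcondition violated"
            return result
        return inner
    return wrap

@contract(pre=lambda x: x >= 0, post=lambda r: r >= 0)
def isqrt(x):
    # Integer square root by simple search; fine for a sketch.
    r = 0
    while (r + 1) * (r + 1) <= x:
        r += 1
    return r

print(isqrt(10))   # -> 3
```

Languages with native contracts build this checking into the compiler, including the ability to strip it from release builds.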

Having benefited from these in other languages such as Python and C#,
I'm not going back. Ever.

Regarding the machine code generation, I don't find it easy to do. The
Intel instruction and register set looks like an exercise in
obfuscation and frustration. RISC chips would be far easier, but I
don't think anyone is beating Intel/AMD at price/performance/power.
With .NET I can generate a fairly straightforward bytecode with
reasonable effort and leverage all the work Microsoft and Novell have
put into the arcane art of optimal machine code generation.


As Michael Wilson pointed out, only one thing is certain when it comes
to a language choice for FAI development:  If you build an FAI in
anything other than Lisp, numerous Lisp fanatics will spend the next
subjective century arguing that it would've been better to use Lisp.


Eliezer, do you write code at the institute? What language do you use and
for what reasons? What do you like and dislike about it with respect
to your project? Just curious.

Best regards,

-Chuck



Re: Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-18 Thread Chuck Esterbrook

On 2/18/07, Aki Iskandar [EMAIL PROTECTED] wrote:

Enough said.  I think we can all get along, and learn something from
each other.


Oh, yeah??? Prove it!

LOL No, I'm totally kidding. I couldn't resist making that joke.  :-)

There are certainly a couple people on this list that take every
comment as an arguing point when in fact, some of our comments are
conversational, usually to provide context for subsequent points.

But please keep in mind that a statement like "Do you work at
Microsoft?" especially followed by "I do" can *easily* be taken the
wrong way even if you did not mean it that way.


Peace,

-Chuck



Re: Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-18 Thread Aki Iskandar


lol ... I enjoy your humor.

Good point on the Microsoft thing.  And you're right.  I certainly  
didn't mean it to be a snide remark.  When I used to work at  
Microsoft, I got tired of the "Microsoft is king" attitude - it was  
rampant - unfortunately.  So my comment was only contextual - the  
poster's comment "Far and away, the best answer to the best language  
question is the .NET framework." was very reminiscent of the  
Microsoft culture - that is the only reason I wrote it.


In fact, I made sure to claim that I was NOT a .NET expert.   
Microsoft was a proud moment in my life, but I'm glad it's over.


But I agree.  The Microsoft comment could have been, and may have been,  
taken the wrong way.  So, I am sorry if it sounded snooty.  I assure  
everyone that this was not my intention.


I've learned that the motivation / preference for selection of  
languages - for any domain, not just AI - are like belly buttons,  
everybody has one :-)


On another note, are you planning on an IDE for Cobra?  Can you write  
an extension for VS.NET, or for WingWare's Wing IDE?  How does one  
develop in Cobra?  Now and in the future.


Thanks Chuck





On 18-Feb-07, at 2:09 PM, Chuck Esterbrook wrote:


On 2/18/07, Aki Iskandar [EMAIL PROTECTED] wrote:

Enough said.  I think we can all get along, and learn something from
each other.


Oh, yeah??? Prove it!

LOL No, I'm totally kidding. I couldn't resist making that joke.  :-)

There are certainly a couple people on this list that take every
comment as an arguing point when in fact, some of our comments are
conversational, usually to provide context for subsequent points.

But please keep in mind that a statement like "Do you work at
Microsoft?" especially followed by "I do" can *easily* be taken the
wrong way even if you did not mean it that way.


Peace,

-Chuck





Re: Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-18 Thread Eliezer S. Yudkowsky

Chuck Esterbrook wrote:

On 2/18/07, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:


Heh.  Why not work in C++, then, and write your own machine language?
No need to write files to disk, just coerce a pointer to a function
pointer.  I'm no Lisp fanatic, but this sounds more like a case of
Greenspun's Tenth Rule to me.


I find C++ overly complex while simultaneously lacking well known
productivity boosters including:
* garbage collection
* language level bounds checking
* contracts
* reflection / introspection (complete and portable)
* dynamic loading (portable)
* dynamic invocation


I was being sarcastic, not advocating C++ as the One True AI language.


Eliezer, do you write code at the institute? What language do you use and
for what reasons? What do you like and dislike about it with respect
to your project? Just curious.


I'm currently a theoretician.  My language-of-choice is Python for 
programs that are allowed to be slow.  C++ for number-crunching. 
Incidentally, back when I did more programming in C++, I wrote my own 
reflection package for it.  (In my defense, I was rather young at the time.)


B. Sheil once suggested that LISP excels primarily at letting you change 
your code after you realize that you wrote the wrong thing, and this is 
why LISP is the language of choice for AI work.  Strongly typed 
languages enforce boundaries between modules, and provide redundant 
constraints for catching bugs, which is helpful for coding conceptually 
straightforward programs.  But this same enforcement and redundancy 
makes it difficult to change the design of the program in midstream, for 
things that are not conceptually straightforward.  Sheil wrote in the 
1980s, but it still seems to me like a very sharp observation.


If you know in advance what code you plan on writing, choosing a 
language should not be a big deal.  This is as true of AI as any other 
programming task.


--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



Re: Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-18 Thread Mark Waser

Aki,

   Picking a language, like any other choice, should be based upon 
articulable criteria (even if only "because I enjoy writing in it more than 
anything else").


   Your e-mail(s) provide(d) no substance other than unsupported opinions 
(and incorrect facts).


   I called you on it (and provided supporting facts, criteria, and other 
info).  Instead of providing substance to refute me or continue a *useful* 
discussion, you continue down the path of no substance (whining about my 
e-mail rather than discussing or rebutting facts).


   Dude, develop the thick skin you referenced and play science the right 
way, with facts.


- Original Message - 
From: Aki Iskandar [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Sunday, February 18, 2007 1:45 PM
Subject: **SPAM** Re: Languages for AGI [WAS Re: [agi] Priors and indefinite 
probabilities]





Mark -

I don't know you, and have no bones to pick with you.  I have no  bases, 
nor do I have motivations for doing so.


Picking a language is not a science - so to prove or test things, 
well ...


If you believe I'm wasting your time - don't bother reading - or  replying 
to my posts.


I, as much as you (or anyone else on this thread / list) have the  right 
to say what we like.  And by consequence, your email to me  below - as 
inapropriate, and frankly childish, as it was - was well  within your 
right.


My only comment is ... Stop taking things as attacks.  Get some  thick 
skin.  Because in science, you need it.   And believe it or  not, I am 
saying that out of respect to you.  Maybe you're having a  bad day - we 
all do - but if anyone wastes time, it is people  shouting at others.


Look at your email to me again.  Was this called for?  Look at your 
subsequent email to Eliezer.  Come on man. Lighten up a little.


Everyone else ... I apologize for taking your time to read this  email. 
I'm just hoping it'll keep anyone from flaming people and  calling them 
stupid.


Enough said.  I think we can all get along, and learn something from  each 
other.


~Aki



On 18-Feb-07, at 1:21 PM, Mark Waser wrote:

[Aki]  This is by far too strong a statement - and most likely 
incorrect.


Don't play with "most likely"s.  Either disprove my statement or  don't 
waste our time.



Mark, do you work at Microsoft?


No, but the question is irrelevant (as is your having worked at Microsoft
-- except insofar as your believing that it proves something shows that
your beliefs are questionable).


there are more reasons than I have time to elaborate why I can't  agree 
with your statement.


So give us ONE!  Why are you wasting my attention if you won't back  up 
your statements with verifiable facts?


And, from a practical programmatic way of having  code generate  code, 
those are the only two ways.  The way you  mentioned - a  text file - 
you still have to call the compiler (which  you can do  through the 
above namespaces), but then you still have to  bring  the dll into the 
same appdomain and process.  In short, it is a   huge performance hit, 
and in no way would seem to be a smooth   transition.


Spoken by a man who has clearly never tried it.  I have functioning  code 
that does *exactly* what I outlined.  There is no perceptible  delay when 
the program writes, compiles, links, starts a new  thread, and executes 
the second piece of new code (the first piece  generates a minor delay 
which I attribute to loading the compiler  and other tools into memory).


Also, even if it *did* generate a delay, this function shouldn't  happen 
often enough for it to be a problem, and there are numerous  ways around the 
delay (multi-tasking, etc).


BTW - My apologies to Chuck for misattributing the quote.


- Original Message - From: Aki Iskandar [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, February 18, 2007 12:36 PM
Subject: **SPAM** Re: Languages for AGI [WAS Re: [agi] Priors and 
indefinite probabilities]





Before I comment on Mark's response, I think that the best comment  on 
this email thread came from Pei, who wrote ...


quote
I guess you can see, from the replies so far, that what language
people choose is strongly influenced by their conception of AI. Since
people have very different opinions on what an AI is and what is the
best way to build it, it is natural that they selected different
languages, based mainly on its convenience for their concrete  goal, or
even tried to invite new ones.

Therefore, I don't think there is a consensus on what the most
suitable language is for AI.
end quote

However, there was an upshot to all the replies to the original 
question - as with any emotionally charged discourse, there  are 
nuggets of learning (I'm gaining insights into languages -  and thus 
others have likely learned things as well).


ok - now to briefly reply

[Mark]  Far and away, the best answer to the best language  question  is 
the .NET framework.


[Aki]  This is by far too strong a statement - and most likely 
incorrect

Re: Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-18 Thread Eugen Leitl
On Sun, Feb 18, 2007 at 09:51:45AM -0800, Eliezer S. Yudkowsky wrote:

 As Michael Wilson pointed out, only one thing is certain when it comes 
 to a language choice for FAI development:  If you build an FAI in 
 anything other than Lisp, numerous Lisp fanatics will spend the next 
 subjective century arguing that it would've been better to use Lisp.

All languages are shallow as far as AI is concerned, and only useful
to figure out the shape of the dedicated hardware for the target.
C-like things are more or less useful with meshed FPGA cores with
embedded RAM, but for a really minimalistic cellular architecture
C is also quite useless. However, C/MPI is very useful for running
a prototype on a large scale machine, with some 10^4..10^6 nodes.

It doesn't matter (much) which language you use in the initial prototype
phase, you will have to throw it away anyway.

Oh, and Python being slow: IronPython is .Net, and extending/expanding
Python for the prototype you do in C is the standard approach. 

A possible solution for those who're loath to touch hardware design: Erlang.

-- 
Eugen* Leitl http://leitl.org
__
ICBM: 48.07100, 11.36820            http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE





Re: Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-18 Thread Jey Kottalam

You might want to consider the Boo programming language for a
Python-like language on .NET.
http://en.wikipedia.org/wiki/Boo_programming_language
http://boo.codehaus.org/

/offtopic

-Jey Kottalam

On 2/18/07, Aki Iskandar [EMAIL PROTECTED] wrote:


Chuck, I looked at Cobra yesterday, and I like it :-)

I will try to get some time and play with it.  My love of Python, and
reluctant admittance of appreciating .NET, are pointing me in the
direction of using one of 3 languages:

In no particular order:

1 - Python (CPython)
2 - IronPython
3 - Cobra

but I will also continue to explore Common Lisp as time permits ...
its macros look promising ... but admittedly, it will take me some
time to absorb the language - so for now, it's regular Python,
IronPython, or yours (Cobra)!

One thing for sure though ... at least from my view ... Java and C++
are just not good enough - when I consider several factors ...
including productivity.   With the languages out there today, C++
makes absolutely no sense.   Java is just not as good as .NET ... but
this is because it came first, and was the .NET guinea pig.  Java was
great before C# / .NET.

~Aki




On 18-Feb-07, at 12:29 PM, Chuck Esterbrook wrote:

 On 2/18/07, Mark Waser [EMAIL PROTECTED] wrote:
 Chuck is also absolutely incorrect that the only way to generate
 code by
 code is to use Reflection.Emit.  It is very easy to have your code
 write
 code in any language to a file (either real or virtual), compile
 it, and
 then load the resulting library (real or virtual) anytime you want/
 need it.

 I'm not incorrect--because I never said that. Aki Iskandar brought
 that issue up. Then I pointed out that .NET code executes much faster
 than Python. I was not stating or implying that Reflection.Emit was
 the only means to produce .NET code.

 My Cobra compiler, for example, currently generates C# instead of
 bytecode, for numerous advantages:
 (a) faster bootstrapping (C# is higher level than bytecode)
 (b) leverage the excellent bytecode generation of the C# compiler
 (c) use C#'s error checking as an extra guard against deficiencies in
 my pre-1.0 compiler

 There is absolutely no run-time cost to this method (if you're
 keeping the
 compiled code somewhere in your knowledge base) since you're
 dealing with
 compiled code (as long as you know how to manage spawning and killing
 threads and processes so that you don't keep nine million
 libraries loaded
 that you'll never use again).

 Well, "absolutely no run-time cost" is a bit strong. Code generation
 itself takes time, no matter what technique you use. And if you go the
 generate source code route then writing it to disk, invoking a
 compiler and linking it back in is a pretty slow process. I've looked
 for a way to do it all in memory, but haven't found one. (You can
 actually link in the C# compiler as a DLL so it's resident in your
 process, but its API still wants a disk-based file.)

 But unless you're throwing away your generated code very quickly
 without using it much (seems unlikely), you'll make up the difference
 quite easily.

 And even dynamically loading DLLs and managing how you use them,
 unload them, etc. has *some* cost.
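For comparison, Python can keep the entire generate / compile / load cycle in memory, with no source file on disk and no DLL to link back in (a sketch with invented module and function names):

```python
import types

# Build a module entirely in memory: no source file on disk, no
# library to load back into the process.  The one-time compile cost
# is then amortized over every subsequent call.
source = "def triple(x):\n    return 3 * x\n"

mod = types.ModuleType("generated_mod")         # invented module name
exec(compile(source, "<in-memory>", "exec"), mod.__dict__)

print(mod.triple(14))                           # -> 42
```

Whether a .NET toolchain can avoid the disk round-trip is exactly the open question raised above; this only shows the in-memory version is possible in principle.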

 I also wouldn't sneer at using an established enterprise-class
 database to
 serve as one or more of your core knowledge stores.  There is *a
 lot* of
 ...

 You are absolutely...correct. I think the utility of existing database
 servers is very underappreciated in academia and many AI researchers
 are from academia or working on academia style projects (gov't
 research grants or work to support research--not that there's anything
 wrong with that!). But it's too bad as databases have a lot to offer.
 Anyone, feel free to ask if you want me to expand.
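As a minimal illustration of the idea, even an embedded SQL database is enough to prototype a knowledge store; the triple schema and names below are invented for the sketch:

```python
import sqlite3

# A tiny subject-predicate-object store on an in-memory SQLite database.
# The database engine provides storage, transactions and querying for free.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE facts (subj TEXT, pred TEXT, obj TEXT)")
db.executemany(
    "INSERT INTO facts VALUES (?, ?, ?)",
    [("python", "is_a", "language"),
     ("csharp", "is_a", "language"),
     ("python", "runs_on", "cpython")],
)

# Query: everything known to be a language.
langs = [row[0] for row in
         db.execute("SELECT subj FROM facts WHERE pred = 'is_a' "
                    "AND obj = 'language' ORDER BY subj")]
print(langs)
```

A production server adds indexing, concurrency and recovery on top of the same model, which is the infrastructure argument being made here.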

 The dumbest thing AGI researchers do is re-invent the wheel
 constantly when it
 isn't necessary.  I'm heartily with Richard Loosemore and his
 call for
 building a research infrastructure instead of all the walled
 gardens (with
 long, low learning curves and horrible enhancement curves) that we
 have
 currently.

 Some reuse is easy. Fairly generic components like languages and
 databases are easy to leverage on a project. After that, it gets very
 difficult. Normally, something has be documented, be stable, run fast,
 be on the same platform *and* be the right fit before it will be
 adopted on a serious project.

 Regarding platform, while you and I like .NET some people will reject
 it because Microsoft (and the former Borland engineers they hired to
 work on it), created it. I've talked to people who said they would use
 it if it were open source. So I point them to Novell Mono (the open
 source clone) at which point they claim they can't use it because
 Microsoft will eventually shut Novell down. After I point out that
 Microsoft submitted .NET as a published standard so that projects like
 Novell Mono could take place, well... then it's on to the next excuse.

 One legit excuse is that some people already have a huge investment in
 

Re: Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-18 Thread Charles D Hixson

Chuck Esterbrook wrote:

On 2/18/07, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:

Mark Waser wrote:
...


I find C++ overly complex while simultaneously lacking well known
productivity boosters including:
* garbage collection
* language level bounds checking
* contracts
* reflection / introspection (complete and portable)
* dynamic loading (portable)
* dynamic invocation

Having benefited from these in other languages such as Python and C#,
I'm not going back. Ever.
...
Best regards,

-Chuck
You might check out D ( http://www.digitalmars.com/d/index.html ).  Mind 
you, it's still in the quite early days, and missing a lot of libraries 
... which means you need to construct interfaces to the C versions.  
Still, it answers several of your objections, and has partial answers to 
at least one of the others.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-18 Thread Chuck Esterbrook

On 2/18/07, Charles D Hixson [EMAIL PROTECTED] wrote:

Chuck Esterbrook wrote:
 On 2/18/07, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
 Mark Waser wrote:
 ...

 I find C++ overly complex while simultaneously lacking well known
 productivity boosters including:
 * garbage collection
 * language level bounds checking
 * contracts
 * reflection / introspection (complete and portable)
 * dynamic loading (portable)
 * dynamic invocation

 Having benefited from these in other languages such as Python and C#,
 I'm not going back. Ever.
 ...
 Best regards,

 -Chuck
You might check out D ( http://www.digitalmars.com/d/index.html ).  Mind
you, it's still in the quite early days, and missing a lot of libraries
... which means you need to construct interfaces to the C versions.
Still, it answers several of your objections, and has partial answers to
at least one of the others.


Thanks for the suggestion. I cranked out lots of D for a few weeks and
overall it's a nice language. In fact, I was jealous to see my unit
testing as a language feature idea already implemented before I had a
chance to implement it myself.
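For readers who haven't seen it, D's unittest blocks let tests live beside the code they exercise and run as part of the build. A rough Python analogue, with invented names and a run-at-import self-test standing in for D's compiled-in blocks:

```python
# Rough analogue of D's built-in `unittest` blocks: tests living next
# to the code they exercise, executed when the module itself runs.
def fib(n):
    # Iterative Fibonacci; fib(0) == 0, fib(1) == 1.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def _selftest():
    # In D these asserts would sit in a `unittest { ... }` block and be
    # compiled in with -unittest; here we just call them explicitly.
    assert fib(0) == 0
    assert fib(1) == 1
    assert fib(10) == 55

_selftest()
print("self-tests passed")
```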

D still isn't as high level as I'd like (think Python, Ruby) and its
evolution felt painfully slow. It's also a language unto itself,
whereas I'm a fan of using .NET/mono to get quick access to existing
libraries and tools. Oh yeah, and I could never get a debugger going.
Compounding that pain: there was no stack trace output for runtime
errors like there is for C# or Python.

All my D comments come with a big grain of salt because that was in
late 2005 that I checked it out.

I checked out Boo after that which also has some nice things going for
it, but also had various deficiencies I wasn't willing to live with
(or rework the code for).

Although Cobra is young, it's usable (I rewrote the compiler in Cobra
last fall) and, not surprisingly, I'm especially happy with its choices in
various areas. :-)

It's full steam ahead. Okay, part-time steam-ahead since it's not my day job.

-Chuck



Re: Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-18 Thread Russell Wallace

On 2/18/07, Chuck Esterbrook [EMAIL PROTECTED] wrote:


You are absolutely...correct. I think the utility of existing database
servers is very underappreciated in academia and many AI researchers
are from academia or working on academia style projects (gov't
research grants or work to support research--not that there's anything
wrong with that!). But it's too bad as databases have a lot to offer.
Anyone, feel free to ask if you want me to expand.



Please do; it hadn't jumped out at me that commercial database systems are
suitable for AI work, but I'm not a database expert; I could well be
overlooking something.

Regarding platform, while you and I like .NET some people will reject

it because Microsoft (and the former Borland engineers they hired to
work on it), created it. I've talked to people who said they would use
it if it were open source. So I point them to Novell Mono (the open
source clone) at which point they claim they can't use it because
Microsoft will eventually shut Novell down. After I point out that
Microsoft submitted .NET as a published standard so that projects like
Novell Mono could take place, well... then it's on to the next excuse.



How well does Mono work? In particular, if I write a GUI-intensive program
in Visual C# and try to use Mono to run it on Linux, Solaris or whatever,
will it work entirely, or only mostly with a few glitches to work around, or
will the GUI part crash and burn with only the internal computation part
continuing to function? (I've heard people say the latter, but I haven't
tried it personally, and the question strikes me as relevant since while
Windows dominates the office desktop, Unix is a lot stronger in many
potential AI markets.)



Re: Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-18 Thread Bob Mottram

I've seen the programming language merry-go-round on AI related forums too
many times to become embroiled, but for what it's worth I'm using C# /
.NET.  My master plan for robotic domination involves using Mono.



Re: Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-18 Thread Samantha Atkins
Eugen Leitl wrote:
 On Sun, Feb 18, 2007 at 12:40:03AM -0800, Samantha Atkins wrote:

   
 Really?  I question whether you can get anywhere near the same level of
 reflection and true data - code equivalence in any other standard
 language.  I would think this capability might be very important
 especially to a Seed AI.
 

 snip..

 However, the AI school represented here seems to assume a seed AI (an 
 open-ended agent
 capable of directly extracting information from its environment) is 
 sufficiently simple
 to be specified by a team of human programmers, and implemented explictly by
 a team of human programmers. This type of approach is most clearest 
 represented
 by Cyc, which is sterile. 

Cyc was never intended to be a Seed AI to the best of my knowledge.  If
not, it doesn't make a very clear case against seed AI.

 The reason is assumption that the internal architecture
 of human cognition is fully inspectable by human analyst introspection alone, 
 and 
 that furthermore the resulting extracted architecture is below the complexity 
 ceiling 
 accessible to a human team of programmers. I believe both assumptions are 
 incorrect.
   

I don't believe that any real intelligence will be reasonably
inspectable by human analysts.  As a working software geek these last
three decades or so, I am quite aware of the limits of human
understanding of even perfectly mundane moderately large systems of
code.  I think the primary assumption with Seed AI is that humans can
put together something that has some small basis of generalizable
learning ability and the capacity to self improve from there.  That is
still a tall order but it doesn't require that humans are going to
understand the code very well, especially after an iteration or two.
 There are approaches which involve stochastical methods,
 information theory and evolutionary computation which appear potentially 
 fertile,
 though the details of the projects are hard to evaluate, since lacking 
 sufficient
 numbers of peer-reviewed publications, source code, or even interactive 
 demonstrations.
 Lisp does not particularly excel at these numerics-heavy applications, though 
 e.g.
 Koza used a subset of Lisp sexpr with reasonably good results. 
It is quite possible to write numerics-heavy applications in lisp where
needed that approach the speed of C.  With suitable declarations and
tuned code generation there is no reason for any significant gap. 
Unlike most languages such tuned subsystems can be created within the
language itself fairly seamlessly.   Among other things Lisp excels as
DSL  environment.

What I find problematic with Lisp is that it has been stuck in the
academic/specialist closet too long.  Python, for instance, has a far
greater wealth of libraries and glue for many tasks.  The Common Lisp
standard doesn't even specify a threading and IPC model.  Too much is
done differently in different implementations.  Too much has to be
created from scratch or adapted from the efforts of others in order to
produce many types of practical systems efficiently.  That I have a
problem with.

- samantha

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-18 Thread Samantha Atkins
Mark Waser wrote:

 And, from a practical programmatic way of having  code generate code,
 those are the only two ways.  The way you  mentioned - a text file -
 you still have to call the compiler (which  you can do through the
 above namespaces), but then you still have to  bring the dll into the
 same appdomain and process.  In short, it is a  huge performance hit,
 and in no way would seem to be a smooth  transition.

 Spoken by a man who has clearly never tried it.  I have functioning
 code that does *exactly* what I outlined.  There is no perceptible
 delay when the program writes, compiles, links, starts a new thread,
 and executes the second piece of new code (the first piece generates a
 minor delay which I attribute to loading the compiler and other tools
 into memory).


I have tried it.  I was writing code and especially classes to files,
compiling and loading them into memory back in the mid 80s.  There is no
way that opening a file, writing the code to it, closing the file,
invoking another process or several to compile and link it and still
another file I/O set to load it is going to be of no real performance
cost.  There is also no way it will outperform creating code directly in
a language tuned for it in memory and immediately evaluating it with or
without JIT machine code generation.  .NET is optimized for certain
stack based classes of languages.  Emulating other types of languages on
top of it is not going to be as efficient as implementing them closer to
the hardware.  If the IL allowed creating a broader class of VMs than
it apparently does I would be much more interested.
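The in-memory alternative Samantha describes can be sketched briefly. This is an illustrative Python sketch of the general technique (not either poster's actual system): the generated source is compiled and executed entirely in memory, with no file I/O or external compiler process.

```python
# Generate code as a string, compile it to bytecode in memory, and run it.
# This is the path contrasted with the write-file/compile/link/load round-trip.

source = """
def fitness(xs):
    # generated scoring function
    return sum(x * x for x in xs)
"""

namespace = {}
code = compile(source, "<generated>", "exec")  # to bytecode, no file I/O
exec(code, namespace)                          # defines fitness() in memory

print(namespace["fitness"]([1, 2, 3]))  # prints 14
```

The first call pays only the (cheap) bytecode-compilation cost; repeated calls run the already-compiled function directly.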

 Also, even if it *did* generate a delay, this function shouldn't happen
 often enough to be a problem, and there are numerous ways around
 the delay (multi-tasking, etc.).

How would it help you that much to do a bunch of context switching or
IPC on top of the original overhead?

- samantha



Re: Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-18 Thread Samantha Atkins
Eliezer S. Yudkowsky wrote:


 If you know in advance what code you plan on writing, choosing a
 language should not be a big deal.  This is as true of AI as any other
 programming task.


It is still a big deal.  You want to choose a language that allows you to
express your intent as concisely and clearly as possible with a minimum
of language choice induced overhead.  Ideally you want a language that
actually helps you sharpen your thoughts as you express them.  You want
the result to run at reasonable speed and to be maintainable over time. 
Almost never do you fully know what you plan on writing, let alone
what it will need to handle an iteration or two down the road. You
learn what kind of flexibility to build in to help with inevitable
change.  But the choice of programming language can make a very large
difference in how easy it is to create and maintain that. 

- samantha



Re: [agi] Priors and indefinite probabilities

2007-02-18 Thread Ben Goertzel

Aki Iskandar wrote:


Hello -

I'm new on this email list.  I'm very interested in AI / AGI - but do 
not have any formal background at all.  I do have a degree in Finance, 
and have been a professional consultant / developer for the last 9 
years (including having worked at Microsoft for almost 3 of those years).


I am extremely happy to see that there are people out there that 
believe AGI will become a reality - I share the same belief.  Most, to 
all, of my colleagues see AI as never becoming a reality.  Some that 
do see intelligent machines becoming a reality - believe that it is 
hardware, not software, that will make it so.  I believe the opposite 
... in that the key is in the software - the hardware we have today is 
ample.


The reason I'm writing is that I am curious (after watching a couple 
of the videos on google linked off of Ben's site) as to why you're 
using C++ instead of other languages, such as C#, Java, or Python.  
The latter two, and others, do the grunt work of cleaning up resources - 
thus allowing for more time to work on the problem domain, as well as 
saving time in compiling, linking, and debugging.


I'm not questioning your decision - I'm merely curious to learn about 
your motivations for selecting C++ as your language of choice.




The Novamente AI system is designed to run efficiently on SMP 
multiprocessor machines, using large amounts of RAM (as many gigabytes 
as the machine will support), and requiring complex and customized 
patterns of garbage collection.  The automated GC supplied by languages 
like Java or C# will not do the trick.  C++ is the only language that 
has been intensively battle-tested under this kind of scenario.  (In 
principle, C# could be used, with copious use of unsafe code blocks, but 
it has not been intensively tested in this kind of scenario.)


C++ is a large language that can be used in many different ways.  Early 
Novamente code was somewhat C-ish and is gradually being replaced.  New 
Novamente code makes heavy use of STL, generic design patterns, and the 
Boost library, which is a more elegant C++ dialect.  STL and Boost do a 
lot of the gruntwork for you too, although they're not as simple to use 
as Java or Python, of course.


I personally love the Ruby language, and have prototyped some Novamente 
stuff in Ruby prior to its incorporation in the main C++ codebase.  But 
Ruby is really slow and can't handle complex GC situations.




-- Ben G



Thanks,
~Aki




Re: [agi] Priors and indefinite probabilities

2007-02-18 Thread Aki Iskandar


Thanks Ben - this makes complete sense, and you've answered my  
question precisely.


~Aki


On 19-Feb-07, at 1:03 AM, Ben Goertzel wrote:


Aki Iskandar wrote:


Hello -

I'm new on this email list.  I'm very interested in AI / AGI - but  
do not have any formal background at all.  I do have a degree in  
Finance, and have been a professional consultant / developer for  
the last 9 years (including having worked at Microsoft for almost  
3 of those years).


I am extremely happy to see that there are people out there that  
believe AGI will become a reality - I share the same belief.   
Most, to all, of my colleagues see AI as never becoming a  
reality.  Some that do see intelligent machines becoming a reality  
- believe that it is hardware, not software, that will make it  
so.  I believe the opposite ... in that the key is in the software  
- the hardware we have today is ample.


The reason I'm writing is that I am curious (after watching a  
couple of the videos on google linked off of Ben's site) as to why  
you're using C++ instead of other languages, such as C#, Java, or  
Python.  The latter two, and others, do the grunt work of cleaning up  
resources - thus allowing for more time to work on the problem  
domain, as well as saving time in compiling, linking, and debugging.


I'm not questioning your decision - I'm merely curious to learn  
about your motivations for selecting C++ as your language of choice.




The Novamente AI system is designed to run efficiently on SMP  
multiprocessor machines, using large amounts of RAM (as many  
gigabytes as the machine will support), and requiring complex and  
customized patterns of garbage collection.  The automated GC  
supplied by languages like Java or C# will not do the trick.  C++  
is the only language that has been intensively battle-tested under  
this kind of scenario.  (In principle, C# could be used, with  
copious use of unsafe code blocks, but it has not been intensively  
tested in this kind of scenario.)


C++ is a large language that can be used in many different ways.   
Early Novamente code was somewhat C-ish and is gradually being  
replaced.  New Novamente code makes heavy use of STL, generic  
design patterns, and the Boost library, which is a more elegant C++  
dialect.  STL and Boost do a lot of the gruntwork for you too,  
although they're not as simple to use as Java or Python, of course.


I personally love the Ruby language, and have prototyped some  
Novamente stuff in Ruby prior to its incorporation in the main C++  
codebase.  But Ruby is really slow and can't handle complex GC  
situations.




-- Ben G



Thanks,
~Aki






Re: [agi] Priors and indefinite probabilities

2007-02-17 Thread Aki Iskandar


Hello -

I'm new on this email list.  I'm very interested in AI / AGI - but do  
not have any formal background at all.  I do have a degree in  
Finance, and have been a professional consultant / developer for the  
last 9 years (including having worked at Microsoft for almost 3 of  
those years).


I am extremely happy to see that there are people out there that  
believe AGI will become a reality - I share the same belief.  Most,  
to all, of my colleagues see AI as never becoming a reality.  Some  
that do see intelligent machines becoming a reality - believe that it  
is hardware, not software, that will make it so.  I believe the  
opposite ... in that the key is in the software - the hardware we  
have today is ample.


The reason I'm writing is that I am curious (after watching a couple  
of the videos on google linked off of Ben's site) as to why you're  
using C++ instead of other languages, such as C#, Java, or Python.   
The latter two, and others, do the grunt work of cleaning up resources -  
thus allowing for more time to work on the problem domain, as well as  
saving time in compiling, linking, and debugging.


I'm not questioning your decision - I'm merely curious to learn about  
your motivations for selecting C++ as your language of choice.


Thanks,
~Aki


On 15-Feb-07, at 12:42 PM, Ben Goertzel wrote:


gts wrote:
On Thu, 15 Feb 2007 12:21:22 -0500, Ben Goertzel  
[EMAIL PROTECTED] wrote:


As I see it, science is about building **collective** subjective  
understandings among a group of rational individuals coping with  
a shared environment


That is consistent with the views of de Finetti and other  
subjectivists. In their view our posteriors all converge in the  
end anyway, so it shouldn't matter if there are no 'objective'  
probabilities.


Which I note is highly consistent with Charles Peirce's philosophy  
of science, articulated at the end of the 1800's ...


So none of this is very new ;-)

ben




However, my view is not the most common one, I would suppose...


I'm quite sure you're correct about that.

A minority subjectivist, attempting to communicate his bayesian  
conclusions to a non-subjectivist colleague in the majority,  
could be met with the disconcerting response that his numbers are  
mere statements about his psychology. :/ Thus there exists a  
strong disincentive to be subjectivist in the natural sciences, no  
matter the philosophical consequences.


-gts








Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-17 Thread Richard Loosemore

Aki Iskandar wrote:


Hello -

I'm new on this email list.  I'm very interested in AI / AGI - but do 
not have any formal background at all.  I do have a degree in Finance, 
and have been a professional consultant / developer for the last 9 years 
(including having worked at Microsoft for almost 3 of those years).


I am extremely happy to see that there are people out there that believe 
AGI will become a reality - I share the same belief.  Most, to all, of 
my colleagues see AI as never becoming a reality.  Some that do see 
intelligent machines becoming a reality - believe that it is hardware, 
not software, that will make it so.  I believe the opposite ... in that 
the key is in the software - the hardware we have today is ample.


The reason I'm writing is that I am curious (after watching a couple of 
the videos on google linked off of Ben's site) as to why you're using 
C++ instead of other languages, such as C#, Java, or Python.  The latter 
two, and others, do the grunt work of cleaning up resources - thus 
allowing for more time to work on the problem domain, as well as saving 
time in compiling, linking, and debugging.


I'm not questioning your decision - I'm merely curious to learn about 
your motivations for selecting C++ as your language of choice.


Thanks,
~Aki


It is not always true that C++ is used (I am building my own language 
and development environment to do it, for example), but if C++ is most 
common in projects overall, that probably reflects the facts that:


(a) it is most widely known, and
(b) for many projects, it does not hugely matter which language is used.

Frankly, I think most people choose the language they are already most 
familiar with.  There just don't happen to be any Cobol-trained AI 
researchers ;-).


Back in the old days, it was different.  Lisp and Prolog, for example, 
represented particular ways of thinking about the task of building an 
AI.  The framework for those paradigms was strongly represented by the 
language itself.



Richard Loosemore.




Re: Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-17 Thread Chuck Esterbrook

On 2/17/07, Richard Loosemore [EMAIL PROTECTED] wrote:

It is not always true that C++ is used (I am building my own language
and development environment to do it, for example), but if C++ is most
common in projects overall, that probably reflects the facts that:

...

Back in the old days, it was different.  Lisp and Prolog, for example,
represented particular ways of thinking about the task of building an
AI.  The framework for those paradigms was strongly represented by the
language itself.


What is the nature of your language and development environment? Is it
in the same neighborhood as imperative OO languages such as Python and
Java? Or something different like Prolog?

What about the development environment?

-Chuck



Re: Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-17 Thread Eugen Leitl
On Sat, Feb 17, 2007 at 08:46:17AM -0800, Peter Voss wrote:

 We use .net/ c#, and are very happy with our choice. Very productive.

I don't know much about those. Bytecode, JIT at runtime? Might not be
too slow. If you use code generation, do you do it at source or at bytecode 
level?
 
 Eugen (Of course AI is a massively parallel number-crunching application...
 
 Disagree.

That it is massively parallel, or number-crunching? Or neither
massively parallel nor number-crunching?

-- 
Eugen* Leitl http://leitl.org
__
ICBM: 48.07100, 11.36820    http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE





RE: Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-17 Thread Peter Voss
Dynamic code generation is not a major aspect of our AGI.

To clarify: While I agree that many AI apps require massively parallel
number-crunching, in our AGI approach neither is a major requirement.
'Number crunching' is of course part of any serious AI/AGI implementation,
but we find that (software) design is by far the more important bottleneck.


-Original Message-
From: Eugen Leitl [mailto:[EMAIL PROTECTED] 
Sent: Saturday, February 17, 2007 8:50 AM
To: agi@v2.listbox.com
Subject: Re: Languages for AGI [WAS Re: [agi] Priors and indefinite
probabilities]

On Sat, Feb 17, 2007 at 08:46:17AM -0800, Peter Voss wrote:

 We use .net/ c#, and are very happy with our choice. Very productive.

I don't know much about those. Bytecode, JIT at runtime? Might not be
too slow. If you use code generation, do you do it at source or at bytecode
level?
 
 Eugen (Of course AI is a massively parallel number-crunching
application...
 
 Disagree.

That it is massively parallel, or number-crunching? Or neither
massively parallel nor number-crunching?

-- 
Eugen* Leitl http://leitl.org
__
ICBM: 48.07100, 11.36820    http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE




Re: Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-17 Thread Aki Iskandar


I completely agree with you Pei.  Language choice is all over the  
place, and for differing reasons / views.


I didn't intend on having people spend so many cycles in offering  
their input.  But it sure is a testament to how friendly, and  
passionate about AI, the people on this list are :-)


If I can ask two quick questions, I'll get busy with following the  
suggestions :-)


1 - Of the many branches of mathematics, which is best as a starting  
point?  Calculus? Linear Algebra? Statistics? Other ...


2 - What advice can you give to an AI newbie as to a program to write  
as the first one?  In other words, what puzzle or proof would you  
suggest that he program the computer to solve?


Thanks again everyone,
~Aki



On 17-Feb-07, at 1:41 PM, Pei Wang wrote:


Aki,

I guess you can see, from the replies so far, that what language
people choose is strongly influenced by their conception of AI. Since
people have very different opinions on what an AI is and what is the
best way to build it, it is natural that they selected different
languages, based mainly on their convenience for their concrete goal, or
even tried to invent new ones.

Therefore, I don't think there is a consensus on what the most
suitable language is for AI.

Pei

On 2/17/07, Aki Iskandar [EMAIL PROTECTED] wrote:

Thanks Pei.

I didn't mean for it to be a blanket statement.  I was just surprised
at all the different preferences, so it seemed like language didn't
matter that much.  I would imagine that a healthy portion of people
on this list have a PhD - so clearly there are other factors in
language selection than just familiarity with the language - I was
just curious to learn about some of the factors - since they would
help my understanding of some of the challenges that lie ahead.

I'm in that boat - not a PhD, but was looking for a language more
suited for AI than sticking with my most familiar language (C#) -
and, for the moment anyway, settled on Python.   Prolog, LISP, and
LISP subsets such as Scheme, are traditional AI languages, but I
found that LISP takes a lot of getting used to - more time that I
have - to get proficient enough with it to the point where I can
write interesting stuff.  Python came naturally - and seems more
flexible than C#.

What I found really interesting is that there is someone in this
group that is creating his own language to solve the AI puzzle.
Given the time it takes to create a language, this tells me that
there were too many drawbacks / limitations in using an existing
language.

Regards,
~Aki

On 17-Feb-07, at 1:09 PM, Pei Wang wrote:

 On 2/17/07, Aki Iskandar [EMAIL PROTECTED] wrote:

 What, to
 me - as a complete novice to AI - seems counterintuitive in  
language
 selection, is that the pros and cons of each language come  
second, as

 a factor of selection, to familiarity.

 That conclusion is probably too strong. At least in my case,  
each time

 I switched from a more familiar language to a less familiar one,
 because of some other reasons.

 Pei









Re: Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-17 Thread Pei Wang

On 2/17/07, Aki Iskandar [EMAIL PROTECTED] wrote:


If I can ask two quick questions, I'll get busy with following the
suggestions :-)


They are even more controversial than your previous question. ;-)


1 - Of the many branches of mathematics, which is best as a starting
point?  Calculus? Linear Algebra? Statistics? Other ...


I would say Mathematical Logic and Probability Theory. Even if you
(like me) don't think they are the right tools for AI, you still need
to know them to understand the previous attempts. Calculus and Linear
Algebra are much less relevant.


2 - What advice can you give to an AI newbie as to a program to write
as the first one?  In other words, what puzzle or proof would you
suggest that he program the computer to solve?


I don't think it is a good idea to start problem-specific coding
before briefly browsing the existing approaches towards AI. However,
if you just want to get some first-hand experience while checking out
other people's ideas, a simple learning program may be fun to code,
though I don't have any concrete recommendation now.

Pei


Thanks again everyone,
~Aki



On 17-Feb-07, at 1:41 PM, Pei Wang wrote:

 Aki,

 I guess you can see, from the replies so far, that what language
 people choose is strongly influenced by their conception of AI. Since
 people have very different opinions on what an AI is and what is the
 best way to build it, it is natural that they selected different
 languages, based mainly on their convenience for their concrete goal, or
 even tried to invent new ones.

 Therefore, I don't think there is a consensus on what the most
 suitable language is for AI.

 Pei

 On 2/17/07, Aki Iskandar [EMAIL PROTECTED] wrote:
 Thanks Pei.

 I didn't mean for it to be a blanket statement.  I was just surprised
 at all the different preferences, so it seemed like language didn't
 matter that much.  I would imagine that a healthy portion of people
 on this list have a PhD - so clearly there are other factors in
 language selection than just familiarity with the language - I was
 just curious to learn about some of the factors - since they would
 help my understanding of some of the challenges that lie ahead.

 I'm in that boat - not a PhD, but was looking for a language more
 suited for AI than sticking with my most familiar language (C#) -
 and, for the moment anyway, settled on Python.   Prolog, LISP, and
 LISP subsets such as Scheme, are traditional AI languages, but I
 found that LISP takes a lot of getting used to - more time that I
 have - to get proficient enough with it to the point where I can
 write interesting stuff.  Python came naturally - and seems more
 flexible than C#.

 What I found really interesting is that there is someone in this
 group that is creating his own language to solve the AI puzzle.
 Given the time it takes to create a language, this tells me that
 there were too many drawbacks / limitations in using an existing
 language.

 Regards,
 ~Aki

 On 17-Feb-07, at 1:09 PM, Pei Wang wrote:

  On 2/17/07, Aki Iskandar [EMAIL PROTECTED] wrote:
 
  What, to
  me - as a complete novice to AI - seems counterintuitive in
 language
  selection, is that the pros and cons of each language come
 second, as
  a factor of selection, to familiarity.
 
  That conclusion is probably too strong. At least in my case,
 each time
  I switched from a more familiar language to a less familiar one,
  because of some other reasons.
 
  Pei
 









Re: Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-17 Thread Chuck Esterbrook

On 2/17/07, Aki Iskandar [EMAIL PROTECTED] wrote:

Richard, Danny, Pei, Chuck, Eugen, Peter ... thanks all for answering
my question.

...

C# is definitely a productive language, mainly due to the IDE, and it
is faster than Java - however, it is strongly typed.
Perhaps the disadvantage to C#, from my perspective, is that the only
ways to generate code (by code) are by using the Reflection.Emit and
CodeDOM namespaces.  However, the performance hit is far too costly
to run it - because it has to be compiled (to MSIL / bytecode) and
then the class type has to be loaded, and only then interpreted / run.

Java suffers the same fate, and is slower than C#.

Python is a duck typed language, and has very rich flexibility when
designing datastructures.  In addition, it has a few ways to evaluate
code on the fly (enabling code that writes code).


I've cranked out mounds of Python and C#, so I have a few things to
offer on the subject. Regarding C#'s productivity coming mostly from
the IDE, I think that is only part of the picture. C# offers many high
level, productive features including garbage collection, classes,
exception handling, bounds checking, delegates, etc. while at the same
time offering excellent runtime speed. Those features aren't available
in C and some of them aren't even available in C++. C# is also better
designed and easier to use than Java primarily because it was designed
after Java as a better version of Java.

Python is still faster to crank out code with (and Ruby as well), but
both Python and Ruby are ridiculously slow. That will be a serious
problem if your application is CPU intensive and I believe any AGI
will be (though early exploratory programs may not).

One approach is to use two languages: Yahoo cranked out their
web-based mail site with Python so they could develop it quickly. Then
after it stabilized, they reimplemented it in C++ for performance. Of
course, it would be nice if one language could truly cover both. But
more on that at the end of this message.  :-)

Regarding the overhead of generating code in C#:
* Your AI app may or may not require code generation.
* Python runs so relatively slow that if you execute the generated
code repeatedly, the C# version of the app will still outperform it.

Btw I use WingIDE for Python and recommend it. (And of course VS 2005 for C#.)

Having said all that--I get frustrated by these situations:
(1) I crank out my solution in Python in record time and then grow old
watching it execute.
(2) I watch my C# code fly at runtime, but it takes me 2-3 times
longer to write it.

Bleck!

So I'm working on a language that combines features from the two. It
targets the .NET platform so that it can leverage the work already
done on garbage collection, machine code, etc. as well as the numerous
third party tools and libraries. (Likewise for Novell Mono--the open
source clone of .NET.)

Cobra is currently at a late alpha stage. There are some docs
(including a comparison to Python) and examples. (And pardon my plain
looking web site, but I have no graphics skills.) Here it is:
http://cobralang.com/

It runs as fast as C# and codes almost as quick as Python. It also has
language level features for quality control, including contracts,
compile-time null checking and unit tests. These are found in neither
Python nor C# (but are found in some other languages).
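For readers unfamiliar with contracts, here is a rough Python approximation of what a language-level contract buys you. This is a hedged decorator-based sketch with illustrative names, not Cobra syntax; in Cobra (as in Eiffel) these checks are part of the language rather than bolted on.

```python
# Emulating pre/postcondition contracts in plain Python via a decorator.
def contract(pre=None, post=None):
    def wrap(fn):
        def inner(*args):
            if pre is not None:
                assert pre(*args), "precondition failed"
            result = fn(*args)
            if post is not None:
                assert post(result), "postcondition failed"
            return result
        return inner
    return wrap

@contract(pre=lambda xs: len(xs) > 0,   # caller must supply a non-empty list
          post=lambda r: r >= 0)        # result must be non-negative
def mean_abs(xs):
    return sum(abs(x) for x in xs) / len(xs)

print(mean_abs([-2, 2, 4]))  # 8/3
```

Calling `mean_abs([])` trips the precondition immediately, turning a latent ZeroDivisionError into an explicit contract violation at the call boundary.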

Hey, we're on one of my favorite topics! Feel free to ask questions or
make comments.  :-)

-Chuck



Re: [agi] Priors and indefinite probabilities

2007-02-15 Thread gts

On Wed, 14 Feb 2007 18:03:41 -0500, Ben Goertzel [EMAIL PROTECTED] wrote:

Indeed, that is a cleaner and simpler argument than the various more  
concrete PI paradoxes... (wine/water, etc.)


Yes.

It seems to show convincingly that the PI cannot be consistently applied  
across the board, but only heuristically applied to certain cases and not  
others, as judged contextually appropriate.


Cox addresses exactly what sort of cases in which it might be legitimately  
applied, and they are in his view rare and exceptional.


Such cases exist for example in certain games of chance in which the  
necessary conditions for applying the PI are prescribed by the rules of  
the game or result from the design of the equipment.


Those necessary conditions are in fact what the PI asks us to assume: not  
only must the possibilities be mutually exclusive and exhaustive, but they  
must also be *known a priori to be equiprobable*.


We can say with confidence, for example, that each card in a shuffled deck  
is equally likely to be drawn, but this is because in this trivial case  
equiprobability is prescribed by the rules of the game or results from the  
design of the equipment. The rest of the world is seldom so accommodating.


The principle asks us to assume equiprobability when we have no a priori  
evidence of equiprobability -- that is its very function. So one might  
ask: what good is the PI if it can be invoked only when the possibilities  
are known a priori to be equiprobable?


Cox writes of it only in a rhetorical sense, as if to say, You can invoke  
the PI but only if you already know that which it prescribes is true.


-gts





Re: [agi] Priors and indefinite probabilities

2007-02-15 Thread gts

LEADING TO THE ONLY THING REALLY INTERESTING ABOUT THIS DISCUSSION:


What interests me is that the Principle of Indifference is taken for  
granted by so many people as a logical truth when in reality it is  
fraught with logical difficulties.


Gillies (2000) makes an analogy between the situation in probability  
theory concerning the Principle of Indifference and the situation that  
once existed in set theory concerning the Axiom of Comprehension.


Like the Principle of Indifference, the Axiom of Comprehension seemed  
logical and intuitively obvious. That axiom states that all things which  
share a property form a set. What could be more logical and intuitively  
obvious? But the Axiom of Comprehension led to the Russell Paradox, and a  
crisis in set theory.


Similarly the Principle of Indifference (and its predecessor the Principle  
of Insufficient Reason) led to numerous difficulties, (e.g., the Bertrand  
Paradoxes, and arguments such as Cox's). Subsequently we saw a schism in  
probability theory. The classical theory was discredited, including the  
classical interpretation of Bayes' Theorem, and replaced with at least  
four different alternative interpretations.


Among bayesians, one might say De Finetti and Ramsey and the subjectivists  
helped rescue bayesianism from the jaws of (philosophical) death, by  
separating bayesianism from that albatross around its neck which is the  
Principle of Indifference.


-gts



Re: [agi] Priors and indefinite probabilities

2007-02-15 Thread Ben Goertzel

gts wrote:

LEADING TO THE ONLY THING REALLY INTERESTING ABOUT THIS DISCUSSION:


What interests me is that the Principle of Indifference is taken for 
granted by so many people as a logical truth when in reality it is 
fraught with logical difficulties.


I think it's been a pretty long time since the PI was taken by any
serious thinkers as a logical truth, though...

What it is, is a heuristic principle, which can be applied in a number
of ways to any given situation

The connection of the PI with entropy is interesting, in that it
highlights the subjectivity of entropy.  To calculate the entropy
information-theoretically, one needs to partition the state space of the
system being measured.  Different partitions could lead to different
answers.  So, entropy exists subjectively relative to a certain
observer, who takes a certain coarse-grained view of the state space.

This is consistent with how assuming PI with respect to different
partitions of the state space (a vs. ~a, b vs. ~b) can lead to different
answers --- the PI being a special case of entropy maximization.

Philosophically, this is similar to how the Occam prior depends on the
model of computation under assumption.

So, in Zurek's formulation

Physical Entropy = Statistical Entropy + Algorithmic Entropy

the first term is subjective due to dependence on a partition of state
space, and the second term is subjective
due to dependence on a choice of universal computer.

And that's just the way it is  But, this is all basically old stuff,
and I'm not sure why it requires so much discussion
at this point!

-- Ben




Re: [agi] Priors and indefinite probabilities

2007-02-15 Thread gts

On Thu, 15 Feb 2007 11:21:25 -0500, Ben Goertzel [EMAIL PROTECTED] wrote:


I think it's been a pretty long time since the PI was taken by any
serious thinkers as a logical truth, though...


Objective bayesianism stands or falls (vs subjective bayesianism) on this  
question of whether the PI is a valid logical principle. And as far as I  
can tell objective bayesians certainly try to defend it as such. The PI is  
a main tenet of objective bayesianism; perhaps even its defining  
characteristic.


Concerning physical entropy, the PI works well as a heuristic in certain  
related applications relevant to the physical sciences, which is why some  
physicists such as Jaynes were so fond of it. (Interestingly, though, Cox  
is a physicist and he is apparently not so fond of it.)


Jaynes points out accurately that physicists have used the PI on numerous  
occasions to make accurate predictions, but Gillies points out that this  
heuristic success in no way proves the PI as a logical principle; if that  
were true then no empirical measurements would be needed to establish the  
veracity of their related hypotheses.


One might ask why objective bayesianism is still attractive to many. This  
I think is a very interesting question. I believe it has something to do  
with the sociology of science, where pragmatic considerations often take  
precedence over philosophy. Scientists, especially natural scientists,  
have a strong need to communicate mathematical ideas in an objective  
manner. Objective bayesianism offers the hope that a scientist can show  
his colleagues that a hypothesis is true at some *objective* level of  
credibility. That hope of objectivity is not present under subjective  
bayesianism, even if subjective bayesianism might have a more solid  
philosophical footing.


For the same reason I think it's still true that most natural scientists  
eschew bayesianism whenever possible, preferring to think and communicate  
in terms of objectivist interpretations.


-gts



Re: [agi] Priors and indefinite probabilities

2007-02-15 Thread gts

On Thu, 15 Feb 2007 12:21:22 -0500, Ben Goertzel [EMAIL PROTECTED] wrote:

As I see it, science is about building **collective** subjective  
understandings among a group of rational individuals coping with a  
shared environment


That is consistent with the views of de Finetti and other subjectivists.  
In their view our posteriors all converge in the end anyway, so it  
shouldn't matter if there are no 'objective' probabilities.



However, my view is not the most common one, I would suppose...


I'm quite sure you're correct about that.

A minority subjectivist, attempting to communicate his bayesian  
conclusions to a non-subjectivist colleague in the majority, could be met  
with the disconcerting response that his numbers are mere statements about  
his psychology. :/ Thus there exists a strong disincentive to be  
subjectivist in the natural sciences, no matter the philosophical  
consequences.


-gts



Re: [agi] Priors and indefinite probabilities

2007-02-15 Thread gts

So none of this is very new ;-)


No. :)

Also your idea of collective subjective understandings sounds similar to  
something I read about an 'inter-subjective' interpretation of probability  
theory, which purports to stand somewhere between objective bayesianism  
and subjective bayesianism. Lots of people with different ideas...


By the way, did Lakatos take a stand on these questions? I.e., did he  
endorse any particular interpretation separate from any observations he  
may have made about their development?


PS I've been getting multiple copies of your posts. Not sure if the  
problem is here or there but thought I would bring it to your attention.


-gts



Re: [agi] Priors and indefinite probabilities

2007-02-14 Thread gts

Tying together recent threads on indefinite probabilities and prior
distributions (PI, maxent, Occam)...


For those who might not know, the PI (the principle of indifference)  
advises us, when confronted with n mutually exclusive and exhaustive  
possibilities, to assign probabilities of 1/n to each of them.


In his book _The Algebra of Probable Inference_, R.T. Cox presents a  
convincing disproof of the PI when n = 2. I'm confident his argument  
applies for greater values of n, though of course the formalism would be  
more complicated.


His argument is by reductio ad absurdum; Cox shows that the PI leads to an  
absurdity. (Not just an absurdity in his view, but a monstrous absurdity  
:-)


The following quote is verbatim from his book, except that in the interest  
of clarity I have used the symbol & to mean "and" instead of the dot  
used by Cox. The symbol v means "or" in the sense of "and/or".


Also there is an axiom used in the argument, referred to as Eq. (2.8 I).  
That axiom is


(a v ~a) & b = b.

Cox writes, concerning two mutually exclusive and exhaustive propositions  
a and b...

==
...it is supposed that

a | a v ~a = 1/2

for arbitrary meanings of a.

In disproof of this supposition, let us consider the probability of the  
conjunction a & b on each of the two hypotheses, a v ~a and b v ~b. We have

a & b | a v ~a = (a | a v ~a)[b | (a v ~a) & a]

By Eq. (2.8 I), (a v ~a) & a = a and therefore

a & b | a v ~a = (a | a v ~a) (b | a)

Similarly

a & b | b v ~b = (b | b v ~b) (a | b)

But, also by Eq. (2.8 I), a v ~a and b v ~b are each equal to (a v ~a) &  
(b v ~b) and each is therefore equal to the other.

Thus

a & b | b v ~b = a & b | a v ~a

and hence

(a | a v ~a) (b | a) = (b | b v ~b) (a | b)

If then a | a v ~a and b | b v ~b were each equal to 1/2, it would follow  
that b | a = a | b for arbitrary meanings of a and b.

This would be a monstrous conclusion, because b | a and a | b can have any  
ratio from zero to infinity.

Instead of supposing that a | a v ~a = 1/2, we may more reasonably  
conclude, when the hypothesis is the truism, that all probabilities are  
entirely undefined except those of the truism itself and its  
contradictory, the absurdity.

This conclusion agrees with common sense and might perhaps have been  
reached without formal argument, because the knowledge of a probability,  
though it is knowledge of a particular and limited kind, is still  
knowledge, and it would be surprising if it could be derived from the  
truism, which is the expression of complete ignorance, asserting nothing.

===
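Cox's reductio turns on the last step: b | a = a | b fails for arbitrary propositions. A quick numeric check, with an arbitrarily chosen joint distribution (the numbers are invented for illustration):

```python
# Joint distribution over (a, b): P(a&b), P(a&~b), P(~a&b), P(~a&~b).
p_ab, p_anb, p_nab, p_nanb = 0.1, 0.4, 0.2, 0.3

p_a = p_ab + p_anb   # P(a) = 0.5
p_b = p_ab + p_nab   # P(b) = 0.3

b_given_a = p_ab / p_a   # P(b|a) = 0.2
a_given_b = p_ab / p_b   # P(a|b) = 1/3

print(b_given_a, a_given_b)
```

The two conditionals differ, as they generically do; so the supposition that a | a v ~a and b | b v ~b both equal 1/2 cannot stand for arbitrary a and b.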

-gts






Re: [agi] Priors and indefinite probabilities

2007-02-14 Thread Ben Goertzel


Indeed, that is a cleaner and simpler argument than the various more 
concrete PI paradoxes... (wine/water, etc.)


It seems to show convincingly that the PI cannot be consistently applied 
across the board, but can be heuristically applied to certain cases but 
not others as judged contextually appropriate.


This of course is one of the historical arguments for the subjective, 
Bayesian view of statistics; and also for the interval representation of 
probabilities (so when you don't know what P(A) is, you can just assign 
it the interval [0,1])


Ben






RE: [agi] Priors and indefinite probabilities

2007-02-14 Thread Jef Allbright
Chuckling that this is still going on, and top posting based on Ben's
prior example...

Cox's proof is all well and good, but I think gts still misses the
point:

The principle of indifference is still the *best* one can do under
conditions of total ignorance.
Any other distribution would imply some latent knowledge.

The subtle and deeper point missed by gts, and unacknowledged by Cox, is
that while it is logically true you can't get knowledge from ignorance,
as a subjective agent within a consistent reality, sometimes you just
gotta choose anyway, or you don't get to play the game.

LEADING TO THE ONLY THING REALLY INTERESTING ABOUT THIS DISCUSSION:
The deeper philosophical point that, as subjective agents, we can't
actually ask a fully specified question without having a prior of some
kind, and that by playing the game we tend always to move toward a state
of less ignorance.

The principle of indifference, or as Jaynes put it, "equal information
yields equal probabilities," is beautiful in its insistence on
consistency, and there's an even greater beauty in what it says about
our place in the universe.

Ben, thanks for your diplomatic acknowledgement of both sides, below.

- Jef



Ben Goertzel wrote:
 
 Indeed, that is a cleaner and simpler argument than the various
 more concrete PI paradoxes... (wine/water, etc.)
 
 It seems to show convincingly that the PI cannot be consistently
 applied across the board, but can be heuristically applied to
 certain cases but not others as judged contextually appropriate.
 
 This of course is one of the historical arguments for the
 subjective, Bayesian view of statistics; and also for the
 interval representation of probabilities (so when you don't know
 what P(A) is, you can just assign it the interval [0,1])
 
 Ben
 



Re: [agi] Priors and indefinite probabilities

2007-02-12 Thread Mike Dougherty

On 2/11/07, Ben Goertzel [EMAIL PROTECTED] wrote:

We don't use Bayes Nets in Novamente because Novamente's knowledge
network is loopy.  And the peculiarities that allow standard Bayes net
belief propagation to work in standard loopy Bayes nets, don't hold up


I know what you mean by the term "loopy", but you should be careful how
you use it in casual conversation, or else you risk painting a very
different picture of NM.  :)



[agi] Priors and indefinite probabilities

2007-02-11 Thread Benjamin Goertzel

Tying together recent threads on indefinite probabilities and prior
distributions (PI, maxent, Occam), I thought I'd make a note on the
relation between the two topics.

In the indefinite probability approach, one assigns a statement S a
truth value <L,U,b,k> denoting one's attachment of probability b to
the statement that: after k more observations have been made, one's
best guess regarding the probability of S will lie in [L,U].

Suppose one has already made n observations contributing evidence
regarding the truth value of S.  Based on these n observations, one
may make various guesses regarding the nature of the process
underlying S.  Each of these guesses will lead to a different forecast
regarding future observations of S, hence to different estimates
<L,U,b,k>.

Let H denote a model of the process underlying S.  Let D denote the
data one has gathered regarding S, so far.  Then, we may talk about
P(H|D), the probability of the hypothesis given the data; and, we may
estimate this via Bayes Rule,

P(H|D) = P(D|H) P(H) / P(D)

If H is of a tractable form then P(D|H) can be calculated or
estimated.  On the other hand P(H) is just a prior distribution that
must be assumed.  Which hypotheses do we prioritize?

This is where something like the Occam prior can come into the
indefinite probabilities framework.
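One way this could look concretely (a sketch only; the hypothesis set, the description lengths, and the data are all invented for the example):

```python
from math import comb

# Hypothetical models H of the process underlying S: a coin's bias, with
# an Occam-style prior weighting each model by 2^-(description length).
hypotheses = {        # bias : assumed description length in bits
    0.5:  1,          # "fair" -- the simplest model
    0.25: 3,
    0.75: 3,
    0.9:  5,
}
prior = {h: 2.0 ** -bits for h, bits in hypotheses.items()}
z = sum(prior.values())
prior = {h: p / z for h, p in prior.items()}   # normalize

# Data D: 8 successes in 10 observations of S.
heads, n = 8, 10
def likelihood(bias):                          # P(D|H), binomial
    return comb(n, heads) * bias**heads * (1 - bias)**(n - heads)

# Bayes Rule: P(H|D) = P(D|H) P(H) / P(D)
unnorm = {h: likelihood(h) * prior[h] for h in hypotheses}
p_d = sum(unnorm.values())                     # P(D), by total probability
posterior = {h: u / p_d for h, u in unnorm.items()}

print(posterior)
```

The Occam prior favors the simple "fair" model a priori, while the data pull toward the 0.75 model; the posterior weighs the two against each other.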

In most of our practical calculations using indefinite probabilities,
we assume the observations of S will follow a beta distribution (or a
bimodal distribution if the data seems to suggest this), but this is
just a kind of quickie pragmatic assumption that one makes when there
are not many computational resources available.  A more theoretically
sound approach in general would be to use one's prior understanding of
the world (along with a prior distribution like the Occam assumption)
to rank various potential models underlying S, and then use an
appropriately weighted sum of these models to do the forecasting.
This can be done when the statement S in question is sufficiently
important to deserve such a high level of computational resource
expenditure.
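For what the quickie beta assumption might look like in code, here is an illustrative Monte Carlo sketch (the function name and the quantile scheme are my own invention, not Novamente's code, and the k component is not modeled):

```python
import random

def beta_credible_interval(successes, trials, b=0.9, samples=100_000, seed=0):
    """Estimate an [L, U] interval for P(S), holding with credibility b,
    by sampling a Beta(successes+1, failures+1) posterior (uniform prior)."""
    rng = random.Random(seed)
    failures = trials - successes
    draws = sorted(rng.betavariate(successes + 1, failures + 1)
                   for _ in range(samples))
    lo = draws[int(((1 - b) / 2) * samples)]        # lower tail quantile
    hi = draws[int((1 - (1 - b) / 2) * samples) - 1]  # upper tail quantile
    return lo, hi

# 7 successes in 10 observations of S:
L, U = beta_credible_interval(7, 10)
print(L, U)   # prints an approximate 90% credible interval for P(S)
```

A weighted sum over several candidate models, as described above, would replace the single beta posterior with a mixture before taking quantiles.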

So, the issues of Occam vs. maxent vs. (max physical entropy =
combination of maxent & Occam) arise at this level, in the indefinite
probabilities framework.

-- Ben



Re: [agi] Priors and indefinite probabilities

2007-02-11 Thread Eliezer S. Yudkowsky

Benjamin Goertzel wrote:

Tying together recent threads on indefinite probabilities and prior
distributions (PI, maxent, Occam), I thought I'd make a note on the
relation between the two topics.

In the indefinite probability approach, one assigns a statement S a
truth value <L,U,b,k> denoting one's attachment of probability b to 
the statement that: after k more observations have been made, one's
best guess regarding the probability of S will lie in [L,U].


Ben, is the indefinite probability approach compatible with local 
propagation in graphical models?


--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



Re: [agi] Priors and indefinite probabilities

2007-02-11 Thread Ben Goertzel


Eliezer,



Ben, is the indefinite probability approach compatible with local 
propagation in graphical models?




Hmmm... I haven't thought about this before, but on first blush, I don't 
see any reason why you couldn't locally propagate indefinite 
probabilities through a Bayes net...


We have code that carries out Bayes Rule and other probabilistic rules 
using indefinite probabilities, so implementing an indefinite Bayes 
net would basically be a matter of taking Bayes net code and replacing 
all the applications of probability theory operations, with the 
appropriate C++ function implementing that operation using indefinite 
probabilities.
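To make the replacement concrete, here is a toy sketch of one such operation, Bayes Rule lifted to interval-valued probabilities with naive interval arithmetic (illustrative only, and surely cruder than the actual Novamente C++):

```python
def interval_bayes(p_b_given_a, p_a, p_b):
    """Each argument is an interval (lo, hi) of probabilities; returns
    an interval enclosing P(A|B) = P(B|A) * P(A) / P(B)."""
    lo = p_b_given_a[0] * p_a[0] / p_b[1]  # smallest numerator / largest denominator
    hi = p_b_given_a[1] * p_a[1] / p_b[0]  # largest numerator / smallest denominator
    return max(0.0, lo), min(1.0, hi)

# P(B|A) in [0.7, 0.8], P(A) in [0.2, 0.3], P(B) in [0.4, 0.5]:
L, U = interval_bayes((0.7, 0.8), (0.2, 0.3), (0.4, 0.5))
print(L, U)   # approximately (0.28, 0.60)
```

An "indefinite Bayes net" would thread intervals like these (plus the b and k components) through every belief-propagation step in place of point probabilities.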


[I note that the current indefinite probabilities code makes some 
distributional assumptions that are best viewed as heuristic, but it 
seems to work reasonably in the practical cases where we've tried it.  
The current code is also too slow to use for large-scale applications; 
we know how to speed it up, though, and this has to be done for 
Novamente-internal reasons anyway.]


Making an Indefinite Bayes Net might be a fun thing to do, actually 
(if there is no flaw in the above thinking...)


We don't use Bayes Nets in Novamente because Novamente's knowledge 
network is loopy.  And the peculiarities that allow standard Bayes net 
belief propagation to work in standard loopy Bayes nets don't hold up 
in Novamente, because of the way you have to update probabilities when 
you're managing a very large network in interaction with a changing 
world, where different parts of the network get different amounts of focus.  
So we use different mechanisms to avoid repeated evidence counting, 
whereas loopy Bayes nets rely on the fact that, in the standard 
loopy Bayes net configuration, extra evidence counting occurs in a 
fairly constant way across the network 

However, when you have a set of interrelated knowledge items that you 
know are going to be static for a while, and you want to be able to 
query them probabilistically, then building a Bayes Net (i.e. freezing 
part of Novamente's knowledge network and mapping it into a Bayes Net) 
may be useful.  For this reason we've discussed incorporating Bayes Nets 
as an extra tool within NM, but this hasn't been prioritized since there 
is a lot of more basic NM stuff still unimplemented...


-- Ben



