RE: [agi] Inventory of AGI projects

2002-11-06 Thread Ben Goertzel

> I think the key fact is that most of these projects are currently
> relatively inactive --- plenty of passion out there, just not a
> lot of resources.
>
> The last I heard, both the HAL project and the CAM-Brain project
> were pretty much at a standstill due to lack of funding?

That is correct.

I don't think significant engineering is going on in Pei's NARS project at
the moment, either.

Cyc, Novamente, A2I2, and James Rogers' project, on the other hand, are
quite actively being developed...

> Perhaps a good piece of information to add to a list of AGI projects
> would be an indication of the level of resources that the project has.

A categorization into "projects with an active engineering team working on
them" versus "projects that are on hold" would certainly be valuable, I
agree.

ben

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/



Re: [agi] Inventory of AGI projects

2002-11-06 Thread Pei Wang
Ben said

> I don't think significant engineering is going on in Pei's NARS project at
> the moment, either.

That is partially correct. I'm working on the conceptual design during the
"academic year", and some coding has been done in the (summer and winter)
vacations. Overall, NARS is not "on hold", though it is not moving at full speed.

Pei







[agi] localized and global ontologies

2002-11-06 Thread beadmaster



I invite others (in the bcc) to engage in this discussion at:

http://groups.yahoo.com/group/NaturesPattern/

Laura is the host, and one can read a bit about her way of looking at things at the home page given above.
 
Don Mitchell has made a precise and defensible description of the profound "social" problems that have been caused by the confusion that is current computer science - in particular, the confusion that scientific reductionism, as reflected in most computer science, causes when thinking about the definition of a machine intelligence.
 
Don (his communication is cc'ed below) is responding to another member of this forum, Roger, who makes fun of the notion of what I and a few others have been calling "implicit ontology". As an example of an "implicit ontology" one can point to Latent Semantic Indexing, scatter-gather feature and categorization technology, or attractor neural networks. In these systems there is no "there" there; it is all distributed.
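To make the LSI example concrete, here is a minimal toy sketch (the term-document counts below are invented for illustration only, and this is nothing like a production engine), showing how term similarity emerges from a distributed latent space in which no symbol names any concept:

# Toy illustration of an "implicit ontology" via Latent Semantic Indexing.
# The term-document counts are invented; only the technique is real.
import numpy as np

terms = ["neuron", "attractor", "ontology", "logic"]
# rows = terms, columns = documents (made-up counts)
A = np.array([[2, 0, 1],
              [1, 0, 2],
              [0, 3, 0],
              [0, 2, 1]], dtype=float)

# Truncated SVD: keep k latent dimensions.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
term_vecs = U[:, :k] * s[:k]  # each term becomes a point in latent space

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Term-term similarity emerges from the distributed representation;
# no dimension and no symbol anywhere names the "concept" two terms share.
for i in range(len(terms)):
    for j in range(i + 1, len(terms)):
        print(terms[i], terms[j], round(cos(term_vecs[i], term_vecs[j]), 3))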
 
These are new concepts, this notion of implicit machine ontology; and yet one may claim that the philosophical notion of an ontology is better matched by structural holonomy and topological notions than by the artificial notion that there can be such a thing as a local ontology that has the form of a crisp set of rules and logical atoms. I will define these terms if anyone wishes. They are represented in my written work. The area is referred to by Peircean scholars as topological algebra. The deep work is also called quasi-axiomatic theory (Finn, Pospelov, Osipov). These things are related also to the quantum neurodynamics of Bohm, Penrose, Hameroff, Pribram and others... { I hope that I am not thought, by anyone, to be glossing over differences that are found in these scholars' published works. There are profound differences. But on the issues of stratified complexity there is great commonality. }
 
Don has a natural talent for seeing and working on the differential conversions between machine implicit ontology, such as an LSI engine (or the generalized Latent Semantic Indexing methodology that he and I started to co-discover two years ago), and explicit ontologies such as those found in machine translation systems or, say, the Cyc ontology or Topic Maps (done correctly).
 
I will stand on the stage, anytime, and defend his presentation, as long as the debate is sensible, polite and scholarly - and does not involve interspersed text (which I take the privilege of never reading anymore).
 
As we, the community of knowledge scientists, move toward the first part of the upcoming Manhattan Project to establish the Knowledge Sciences, we as a community must stand up against those who would re-fund and re-deliver a failed notion that computer programs have the same nature as a living system. A computer program is an abstraction:

http://www.bcngroup.org/area3/pprueitt/kmbook/Chapter2.htm
 
 
and these abstractions are vital to the modern world. But to sell the notion of human intelligence short is to divorce our society from that which is human, and this action is exactly the opposite of what we now need to do. It is a mistake on several levels.
 
One may observe that this mistake is driven by the various forms of fundamentalism, including religious and scientific reductionisms - and by economic reductionism. No one is saying (at least not me) that great value has not come from religion and the current reductionist sciences. But at core these social practices become "incorrect" exactly at the point that they become purely reductionist and self-centered. Currently economic reductionism (pure capitalism) is held in control by democracy; but an artificial general intelligence, if constructed to be "superior" to human intelligence, will end this great experiment in democracy, one way or the other. As Don points out, "it" will be superior only in an artificial way, perhaps reinforced into a type of Penrose "the emperor has no clothes" metaphor.
 
Dr. Ben Goertzel's deep work on implicit ontology is important not in the development of something that can not be (by nature) but in the development of something unexpected and new, and thus in great need of definition. But let us not call this "intelligence", as the word already has a meaning that is violated by this notion of a computer intelligence. Let us work on our language so that there is no unnecessary confusion. (I say to my friend, Ben.) I think that Don is leading the way here.
 
I also agree that the community must develop the ability to engage and matriculate new PhDs in areas that Don is defining.
 
A new type of knowledge technology is being made available that depends in a natural way on human sensory and cognitive acuity, and overcomes this false notion that a computer program is anything other than a simple abstraction.
 
Bringing this knowledge technology forward through the cultural resist

[agi] RE: localized and global ontologies

2002-11-06 Thread beadmaster



Ben,
 
I am coming to understand that it is not a small matter of little importance, this notion that a computer program can achieve anything that should properly be called "intelligence". There is more than a philosophical difference here, between you and me.
 
It is a question also of what one attempts to do and what one does not spend (other people's) energy on.
 
You see, for me complexity is defined in a way that must produce a halting condition for a computer program, simply because complexity (defined by Kugler from his study of Rosen) is exactly when "a = a" can not be determined.
 
By definition the computer program can not see and can not compute anything that has any degree of complexity to it - despite what the academic computer science professors say. Computer science has no grounding in observational science, except in the limited sense of never looking at the natural world (at all - ever).
 
It is turtles all the way down. Period. And there are only so many of these turtles before one gets to the final abstraction of being "on" or "off". Stratified complexity will make this clear - and finding and holding to this clarity is what the Manhattan Project must be about. This is not a small matter. It is a matter of putting computer science in its place after several decades of complete domination over all things.
 
When one looks closely one sees that the issue is not computer science but "abstraction". Computer science works because abstractions produce the programming languages and the hard reification of electromagnetic waves into on and off states. This is engineering to produce an engineering tool. Each "on" is "exactly" the same as any other "on".

 
But in natural reality we have something called similarity - but we do not have "exactness", ever. No two "things" are ever exactly the same. Rosen did not state it this way exactly, at least I have not found a way to quote him; but Peter Kugler's work on Rosen's literature led me to see that the category error (mistaking a formal system for a natural system) was the CAUSE of the confusion around AI. I then traced this cause further into what I have come to call the religion of scientific reductionism, and to an IT industry whose production of snake oil is rivaled only by the "medicine men" who traveled from town to town in the early West selling tonics.
 
The very Nation is in some trouble over the failure of IT to produce any sort of real "information exchanges". Some of this failure is due to the wrong-mindedness of the AI camp and of the reductionists at NSF, NIST and DARPA who are focused on professional careers and not on the development of true science.
 
I do agree with you about so many things, and it is a sheer joy to know you.
 
When you say:

"I see the Manhattan Project for KM as having five main aspects:
 
 
1* Actually building a huge integrative database out of existing structured databases
2* Creating tools for creating structured data out of unstructured data (e.g. text)
3* Creating tools for browsing the integrative database
4* Creating tools to encourage humans to collaboratively and individually insert new knowledge into the database
5* Creating computational tools to create new data out of old, and put it in the database

AI plays a role in 3 and 5.

But for starters, it may be that 1, 3 and 4 are our greatest concerns. They're "easy" technically yet difficult to execute politically & socially...

In this picture, Paul's and my disagreement on the relation of intelligence & computation is really a small matter. It has to do with the amount of power that can be achieved in 5, via computational intelligence alone without significant human participation.
"
I would argue that the most important aspect is the design of a cognitive-neuroscience-grounded science of Human Information Interaction (or, as it is being called, HII). It is clear that the man/machine interface has much to gain from the improvement of data aggregation and convolution processes (such as those of Novamente, CCM/LSI, Primentia, and other new methods), as well as from the improvement of the cognitive skill that humans might develop based on what the computer programs can actually do.
 
So I recognize that there is a great need for the Novamente engine and for work like the work I am doing on Latent Semantic Indexing and generalized LSI. But I would change each of the five aspects to read:
 

 
1* Actually building a huge integrative database out of existing structured databases
--> Develop schema-independent means for communicating and storing both semi-structured and structured data.

2* Creating tools for creating structured data out of unstructured data (e.g. text)
--> This is the Implicit-to-Explicit Ontology conversion process that I have recently called "Differential Ontology", but this process must have human decisions within EACH phase of a l

RE: [agi] Spatial Reasoning: Modal or Amodal?

2002-11-06 Thread James Rogers
On Sun, 2002-11-03 at 19:19, Ben Goertzel wrote:
> James Rogers wrote:
> > In practice, the
> > exponent can be
> > sufficiently small (and much smaller than I think most people
> > believe) that
> > it becomes tractable for at least human-level AGI on silicon (my
> > estimate),
> > though it does hit a ramp sooner than later.
> 
> This is an interesting claim you're making, but without knowing the basis
> for your estimate, I can't really comment intelligently.
> 
> I tend to doubt your estimate is correct here, but I'm open-minded enough to
> realize that it might be.  If you ever feel it's possible to share details,
> let me know!


As I recall, the rough estimate using our current architecture put the
memory ramp at somewhere around a trillion neurons equivalent ("neuron
equivalent" being a WAG mapping of structure -- feel free to ignore it).
Somewhere around there (+/- an order of magnitude) it starts to become
fairly expensive in terms of the additional memory required for
modest gains in effective intelligence.

Of course, the machine required to hit this ramp would still be
exceptionally large by today's standards.
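Purely to make the scale vivid -- the bytes-per-neuron-equivalent figure below is as much a WAG as the neuron mapping itself:

# Back-of-envelope scale check; both figures are illustrative guesses.
neurons = 10**12              # the ramp estimate above
bytes_per_neuron = 100        # invented placeholder, not a measured figure
total_bytes = neurons * bytes_per_neuron
print("%.0f TB" % (total_bytes / 1e12))  # ~100 TB under these guesses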

 
> > There is a log(n) algorithm/structure that essentially does this, and it
> > works nicely using maspar too.  It does have a substantially more complex
> > concept of "meta-program" though.
> 
> What exactly does the program you're referring to do?  And what is your n?
> Is it the same as my L?


I'm referring to a time complexity of log(n), where "n" is essentially
your "L". A critical difference algorithmically is that the algorithm
selects the optimal program that is currently "knowable" in L (i.e. a
limits-of-prediction problem), rather than the globally optimal
algorithm in L.

You would quite obviously be correct about the tractability if someone
actually tried to brute force the entire algorithm space in L.  The
knowability factor means that we don't always (hardly ever?) get the
best algorithm, but it learns and adapts very fast and this
automatically sieves the L-space into something very tractable.
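To give a flavor of the sieving idea, here is a deliberately trivial toy of my own (not our actual algorithm or data structure): restrict attention to the candidate programs that remain consistent with the data observed so far, then pick the simplest survivor.

# Toy contrast: brute-forcing a program space vs. sieving it down to the
# currently "knowable" survivors. Illustrative only -- not a real AGI system.
from itertools import product

# "Programs" here are just (a, b) pairs computing f(x) = a*x + b.
# A real program space grows like 2^L in the description length L.
SPACE = list(product(range(-5, 6), repeat=2))  # 121 candidate programs

observed = [(0, 3), (1, 5), (2, 7)]  # (x, y) data seen so far

def consistent(prog):
    a, b = prog
    return all(a * x + b == y for x, y in observed)

# The sieve keeps only programs that cannot yet be ruled out.
survivors = [p for p in SPACE if consistent(p)]
print("space:", len(SPACE), "-> knowable survivors:", len(survivors))

# Select the "simplest" survivor as the best currently-knowable program
# (not the global optimum over all of L, which may not be knowable yet).
best = min(survivors, key=lambda p: abs(p[0]) + abs(p[1]))
print("selected program: f(x) = %d*x + %d" % best)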

 
> If your log(n) is the time complexity, what's the corresponding space
> complexity, and how many processors are required?  Exponential in n?  (One
> can do a lot with maspar with an exponential number of processors!!)


Space complexity as a function of L is exponential (or at least the data
structure's is), though the exponent is reasonable.  The maspar bit just
means that the algorithm and data structure we do this in are
obviously and naturally suited to massively parallel systems.

 
> > More to the point:  I am involved in a commercial venture related to AGI,
> > and the technology is substantially more developed and advanced than I can
> > talk about without lawyers getting involved.  It is sufficiently sexy that
> > it has attracted quite a bit of smart Silicon Valley capital, which is no
> > small feat for any company over the last year or two, never mind
> > any outfit
> > working with "AI".
> 
> yeah, I know your situation (though it's good you mentioned it, so that
> other list members can know too)..


Actually, this situation is a little different. I've dabbled in the
commercial aspects for some time but always pulled back because I
decided that I wasn't ready.  This is actually the real deal
business-wise and relatively recent, not yet at its first birthday.

 
> I assume that your Silicon Valley funding is oriented primarily toward one
> or two vertical-market applications of your technology, rather than oriented
> primarily toward AGI... but that your software is usable in a narrow-AI way
> in the short term, while being built toward AGI in the medium term...


Believe it or not, the people behind the outside funding have a clear
concept of this as an AGI company rather than an application company in
some narrow vertical market, though obviously the initial public
manifestations and demonstrations will actually be vertical applications
that use the AGI technologies.  AGI by itself doesn't do a whole lot --
it mostly just sits around the house drinking my booze. :-)  

A little background: I had some advantage in this in that I've been
around Silicon Valley for over a decade and know quite a few people here
in the capital markets. I have a good reputation here for solving hard
software problems independent of any work on AGI, and I've done quite a
bit of work as a "goto" engineering problem solver for venture
investors.  Knowing all these venture markets people, I very carefully
filtered and selected who I would involve in this, with a major
criterion being individuals who were smart enough to understand and know
what they were looking at.  I didn't even want to talk to people who
wouldn't immediately see past vertical applications and recognize the
capabilities of the core technology in itself.  This is the context in
which I sought (and found) backers: People who knew my background and
reputation well enough that they wouldn't wonder if I had a 

RE: [agi] RE: localized and global ontologies

2002-11-06 Thread Ben Goertzel



 
Paul,
 
Since you don't like interspersed text, I'm going to try to respond to your interesting comments in a mostly non-interspersed way...
 
1)
I'd love to see precise definitions of what you mean by "intelligence" and "complexity."

When you make claims like "no computer program can achieve... intelligence" or "By definition a computer program cannot... compute anything that has any degree of complexity in it", you are using these words in fairly eccentric ways.
 
I'm not saying your definitions are bad, just that I don't understand what they are. I have only a vague intuitive feeling for what they are.
 
2)
I view the mind as a system of patterns, which has a dynamic of recognizing patterns in itself and in the outside world, and emergent between itself and the outside world.

In my view, it's the patterns that are important, not whether they emerge from bits in RAM & processor, or electrical currents in the brain...

This philosophy ties in with Peirce, Nietzsche and others, and it's what Ray Kurzweil calls "patternism".

I think this view is actually rather different from standard scientific reductionism...
 
3)
About the Manhattan Project for Knowledge, others on the list may need a little background here.

This is a project Paul and I have been discussing, related to AGI, which involves basically creating a huge dynamic data repository containing ideally "all human knowledge". We would like to obtain substantial government funding for this project, and Paul has some connections that may potentially be helpful in this regard, over the medium term (or conceivably even the short term).
 
This project relates to AGI in two ways: it would be a heck of a resource for an AGI; and in my view AGI will be very, very useful for making sense of all this knowledge, both independently and in collaboration with humans.

Now, Paul, I will respond to your responses 1 by 1...

n* are Ben's original statements
--> are Paul's comments
!! are Ben's comments on Paul's comments
 
 
1* Actually building a huge integrative database out of existing structured databases
--> Develop schema-independent means for communicating and storing both semi-structured and structured data.
!! Developing schema-independent means for communicating and storing data is important, but it's important largely because it allows us to build the integrative DB I mentioned. Novamente's knowledge representation language is one schema-independent approach that can handle every common type of data fairly gracefully...
 
2* Creating tools for creating structured data out of unstructured data (e.g. text)
--> This is the Implicit-to-Explicit Ontology conversion process that I have recently called "Differential Ontology", but this process must have human decisions within EACH phase of an Implicit-to-Explicit / Explicit-to-Implicit loop. The reasons are many, but avoiding false sense-making is the most important. Computer programs do not exist in the world, and can not achieve a pragmatic axis... and thus the machine ontology can become anything, including something that has no relationship to any part of the natural world. Without a topic-map-type reification process (which must involve humans) the ontology has no way of making the fine adjustments that are so clear to a natural intelligence.
!! I agree that in the short run humans must be involved in every step of this process. I think that in the long run AGIs will be able to do this without human help, but that doesn't affect my ideas about short-term strategy.
 
3* Creating tools for browsing the integrative database
--> I would say that this issue goes away if other issues are addressed properly. There is a "by-pass" that makes the notion of "integrative database" collapse to just "database".
!! Sure, the integrative DB of all world knowledge is just a DB ... but it's a DB with very diverse information, and we don't right now have good UI tools for browsing such a thing.
 
4* Creating tools to encourage humans to collaboratively and individually insert new knowledge into the database
--> People already collaborate in many different ways; it is not the computer that is needed to enhance this natural activity, but rather it is the computer, under the current use patterns, that inhibits this collaboration.
!! Paul, this just doesn't make sense. How is the computer inhibiting our collaboration right now? How is it inhibiting the collaboration between myself and my collaborators in Belo Horizonte??? Without computers (e-mail, CVS, etc. etc.), I couldn't viably collaborate with my Brazilian team at all! And you and I would not be having this dialogue!!!
 
 
5* Creating computational tools to create new data out of old, and put it in the database
--> Why store useless things? One needs to create educational processes that provide humans the ability to understand better and deepe

RE: [agi] Spatial Reasoning: Modal or Amodal?

2002-11-06 Thread Ben Goertzel

James Rogers wrote:
> You would quite obviously be correct about the tractability if someone
> actually tried to brute force the entire algorithm space in L.  The
> knowability factor means that we don't always (hardly ever?) get the
> best algorithm, but it learns and adapts very fast and this
> automatically sieves the L-space into something very tractable.

Any estimates of the average error incurred by searching only the locally
knowable space instead of the whole space?

> Actually, this situation is a little different. I've dabbled in the
> commercial aspects for some time but always pulled back because I
> decided that I wasn't ready.  This is actually the real deal
> business-wise and relatively recent, not yet at its first birthday.

Well, congratulations!!

> Contrary to some rumors, there are a lot of very smart and
> forward-thinking venture funding types in Silicon Valley in addition to
> the usual business school idiots.  Some of them can even talk about
> algorithmic information theory (the theoretical basis of our technology)
> at a shallow level without getting a "deer in the headlights" look.

There are indeed some very smart VC's out there -- and not only in Silicon
Valley.

However, I think you'll agree that they are definitely in the minority!

> As you mention, it is pretty hard to get proper funding for anything
> relating to AGI, especially when it is pretty early in the R&D stage.
> I've actually been working on this AGI technology since the mid-90's,
> though originally I only got involved at all trying to solve a
> particularly difficult adaptive optimization problem for a client. It
> has been essentially self-funded to this point, and it took me a long
> time to develop it to the point where I felt the technology could be
> sold in a marketplace that has a very jaded and skeptical view of "AI".
>
> I'm also an investor/instigator in another venture which has done very
> well and generally made full-funding of the AGI venture fairly certain
> regardless.  Patience, hard work, and all of that; I planned to make
> this happen one way or another. :-)

Well, it all sounds very interesting, and it's really too bad you're not at
this point in a position to share the scientific details with the rest of
us...

Without either

a) the scientific details, or
b) an impressive AGI demonstration to play with

it's obviously not possible for me to intelligently assess how close I think
you are to really cracking the AGI problem...

-- Ben




RE: [agi] Spatial Reasoning: Modal or Amodal?

2002-11-06 Thread James Rogers
On Wed, 2002-11-06 at 19:24, Ben Goertzel wrote:
> James Rogers wrote:
> > You would quite obviously be correct about the tractability if someone
> > actually tried to brute force the entire algorithm space in L.  The
> > knowability factor means that we don't always (hardly ever?) get the
> > best algorithm, but it learns and adapts very fast and this
> > automatically sieves the L-space into something very tractable.
> 
> Any estimates of the average error incurred by searching only the locally
> knowable space instead of the whole space?


No friggin' clue.  I'm not even sure if this could be meaningfully
calculated, or at least I am not sure what "average error" actually
means in this context.  The resource savings are a fair trade-off for
the efficacy though.

On the other hand, the brute force perspective and knowable perspective
of the space seem to be fundamentally different at an important
functional level, but my brain is tired right now so I'm not going to
think about it too hard. :-)


Cheers,

-James Rogers
 [EMAIL PROTECTED]


