Re: [agi] OpenCog Prime complex systems [was MOVETHREAD ... wikibook and roadmap ...]

2008-08-02 Thread Ben Goertzel
Hi...

About OCP and Eliezer ...

This is another topic that was bound to come up!

OpenCogPrime is the design and approach of myself and a number of my
colleagues ... but it's not Eliezer's design or approach

Eliezer and I have many points of agreement, many points of disagreement,
and many points of ongoing debate and discussion ... and of course,
considerable mutual respect for one another as thinkers 

I'm not going to try to put words in his mouth ... but I will say some
fairly obvious things on this theme...

1)
for sure, we would both prefer a provably beneficial AGI system.

2)
He is more optimistic than I am that a provably beneficial (or provably
Friendly, etc.) AGI system is a feasible goal.

3)
we both agree that it would be dangerous to allow an AGI system whose
ethical nature was poorly understood to achieve a high level of intelligence
and/or practical power

4)
I tend to be of the opinion that a useful theory of AGI ethics is more
likely to come out of a combination of theory and experimentation
(experimentation with AGI systems with general intelligence below that of an
adult human, but much greater than that of existing AI programs), than out
of pure armchair theorizing...

5)
I am more optimistic than Eliezer about the AGI potential of OpenCogPrime
type systems


-- Ben G

On Sat, Aug 2, 2008 at 2:31 AM, Terren Suydam [EMAIL PROTECTED] wrote:


 What I don't understand is how Eliezer Yudkowsky, whose focus is on
 Provably Friendly AGI, could be happy with a design that emphasizes
 emergence as the explanatory bridge that crosses the gap between design and
 self-awareness. Ben, is this a point of contention between you two or does
 Eliezer endorse your approach?

 Specifically, how could the concept of Friendliness be programmed in, in
 prior fashion, when the relevant structures (self, other) must emerge after
 the software's been written?

 Terren

 --- On *Fri, 8/1/08, David Hart [EMAIL PROTECTED]* wrote:


 It is intended that correct and efficient learning methodologies will be
 influenced by emergent behaviors arising from elements of interaction
 (beginning at the inter-atom level) and tuning (mostly at the MindAgent
 level), all of which is carefully considered in the OCP design (although
 not yet explicitly and thoroughly explained in the wikibook).

 -dave





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome  - Dr Samuel Johnson



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=108809214-a0d121
Powered by Listbox: http://www.listbox.com


Re: [agi] OpenCog Prime complex systems [was MOVETHREAD ... wikibook and roadmap ...]

2008-08-02 Thread Richard Loosemore

David Hart wrote:
On 8/2/08, *Richard Loosemore* [EMAIL PROTECTED] 
mailto:[EMAIL PROTECTED] wrote:


Thus:  in my paper there is a quote from a book in which Conway's
efforts were described, and it is transparently clear from this
quote that the method Conway used was random search:


I believe this statement misinterprets the quote and severely 
underestimates the amount of thought and design inherent in Conway's 
invention. In my opinion, the stochastic search methodologies (practiced 
mainly by his students) can be considered 'tuning/improvement/tweaking' 
and NOT themselves part of the high-level conceptual design. But this 
topic is a subjective-interpretation rabbit hole that is probably not 
worth pursuing further.


Back on the topic of OpenCog Prime, I had typed up some comments on the 
'required methodologies' thread that were since covered by Ben's 
**interactive learning** comments, but my comments may still be useful 
as they come from a slightly different perspective (although they 
require familiarity with OCP terminology found in the wikibook, and I'm 
sure Ben will chime in to correct or comment if necessary):


'Teaching' [interactive learning] should be included among those words 
loaded with much future work to be done.


'Empirical studies done on a massive scale' includes teaching, and does 
not necessarily imply using strictly controlled laboratory conditions. 
Children learn in their pre-operational and concrete-operational stages 
using their own flavor of 'methodological empirical studies' which the 
teaching stages of OCP will attempt to loosely recreate with proto-AGI 
entities within virtual worlds in a variety of both guided (structured) 
and free-form (unstructured) sessions.


The complex systems issue comes into play when considering the 
interaction of OCP internal components (expressed in code running in 
MindAgents) that modify structures of atoms (including maps, which are 
themselves atoms that encapsulate groups of atoms to store patterns of 
structure or activity mined from the atomspace) with each other and with 
the external world. A key point to consider about MindAgents is that the 
result of their operation is a proxy for the action of atoms-on-atoms. 
The rules that govern some of these inter-atom interactions are 
analogous to the rules within cellular automata systems, and are subject 
to the same general types of manipulations and observable behaviors 
(e.g. low-level logical rules, various algorithmic manipulations such as 
GA and MOSES, and higher-level transformations).
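[The cellular-automata analogy above can be made concrete with a toy sketch. To be clear, the names Atom and MindAgent below merely echo OCP terminology and are NOT the actual OpenCog API; the neighborhood-averaging rule is an invented CA-style stand-in for the real inter-atom rules.]

```python
# Toy illustration only: Atom/MindAgent names echo OCP terminology
# but this is not the OpenCog API; the update rule is invented.
from dataclasses import dataclass, field

@dataclass
class Atom:
    name: str
    sti: float = 0.0                      # toy "importance" value
    neighbors: list = field(default_factory=list)

class ToyMindAgent:
    """The result of its operation is a proxy for atoms acting on
    atoms: a local, uniform rule applied repeatedly, as in a CA."""

    def step(self, atoms):
        # Each atom moves halfway toward its neighbors' mean importance.
        new = {a.name: a.sti for a in atoms}
        for a in atoms:
            if a.neighbors:
                mean = sum(n.sti for n in a.neighbors) / len(a.neighbors)
                new[a.name] = a.sti + 0.5 * (mean - a.sti)
        for a in atoms:
            a.sti = new[a.name]

# A three-atom chain a - b - c; importance diffuses until uniform,
# an "emergent" global pattern no single local rule states explicitly.
a, b, c = Atom("a", sti=1.0), Atom("b"), Atom("c")
a.neighbors, b.neighbors, c.neighbors = [b], [a, c], [b]
agent = ToyMindAgent()
for _ in range(10):
    agent.step([a, b, c])
print(round(a.sti, 2), round(b.sti, 2), round(c.sti, 2))  # → 0.25 0.25 0.25
```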


It is intended that correct and efficient learning methodologies will be 
influenced by emergent behaviors arising from elements of interaction 
(beginning at the inter-atom level) and tuning (mostly at the MindAgent 
level), all of which is carefully considered in the OCP design (although 
not yet explicitly and thoroughly explained in the wikibook).


The complex systems issue does not come into play in only that location. 
 Or rather, there is no basis on which you can say that it only occurs 
there.


More generally, this does not address the questions that I asked.  Was 
it meant to?





Richard Loosemore





RE: [agi] OpenCog Prime wikibook and roadmap posted (moderately detailed design for an OpenCog-based thinking machine)

2008-08-01 Thread Derek Zahn
Ben,
 
Thanks for the large amount of work that must have gone into the production of 
the wikibook.  Along with the upcoming PLN book (now scheduled for Sept 26 
according to Amazon) and re-reading The Hidden Pattern, there should be enough 
material for a diligent student to grok your approach.
 
I think it will take some considerable time for anybody to absorb it all, so 
don't be too discouraged if there isn't a lot of visible banter about issues 
you think are important; we all come at the Big Questions of AGI from our own 
peculiar perspectives.  Even those of us who want to believe may have 
difficulty finding sufficient common ground in viewpoints to really understand 
your ideas in depth, at least for a while.
 
If there's one thing I'd like to see more of sometime soon, it would be more 
detail on the early stages of your vision of a roadmap, to help focus both 
analysis and development.
 
Great stuff!
 




RE: [agi] OpenCog Prime wikibook and roadmap posted (moderately detailed design for an OpenCog-based thinking machine)

2008-08-01 Thread Mark Waser
I would like to second the thank you.  You posted a lot more than I expected 
and I really appreciate it (and intend to show it by thoroughly reading all of 
it and absorbing it before commenting).

Mark
  - Original Message - 
  From: Derek Zahn 
  To: agi@v2.listbox.com 
  Sent: Friday, August 01, 2008 3:41 PM
  Subject: **SPAM** RE: [agi] OpenCog Prime wikibook and roadmap posted 
(moderately detailed design for an OpenCog-based thinking machine)


  Ben,
   
  Thanks for the large amount of work that must have gone into the production 
of the wikibook.  Along with the upcoming PLN book (now scheduled for Sept 26 
according to Amazon) and re-reading The Hidden Pattern, there should be enough 
material for a diligent student to grok your approach.
   
  I think it will take some considerable time for anybody to absorb it all, so 
don't be too discouraged if there isn't a lot of visible banter about issues 
you think are important; we all come at the Big Questions of AGI from our own 
peculiar perspectives.  Even those of us who want to believe may have 
difficulty finding sufficient common ground in viewpoints to really understand 
your ideas in depth, at least for a while.
   
  If there's one thing I'd like to see more of sometime soon, it would be more 
detail on the early stages of your vision of a roadmap, to help focus both 
analysis and development.
   
  Great stuff!

   







Re: [agi] OpenCog

2007-12-29 Thread Richard Loosemore

Mike Dougherty wrote:

On Dec 28, 2007 1:55 PM, Richard Loosemore [EMAIL PROTECTED] wrote:

Mike Dougherty wrote:

On Dec 28, 2007 8:28 AM, Richard Loosemore [EMAIL PROTECTED] wrote:

Actually, that would be a serious misunderstanding of the framework and
development environment that I am building.  Your system would be just
as easy to build as any other.

... considering the proliferation of AGI frameworks, it would appear
that any other framework is pretty easy to build, no?  ok, I'm being
deliberately snarky - but if someone wrote about your own work the way
you write about others, I imagine you would become increasingly
defensive.

You'll have to explain, because I am honestly puzzled as to what you
mean here.


I am not a published computer scientist.  I recognize there are a lot
of brains here working at a level beyond my experience.  I was only
pointing out that using language like "just as easy to build" to
trivialize your system could be confrontational.  It may not
deliberately offend anyone, either because they are also not concerned
about this nuance or they discount your attitude as a matter of
course.


Well, no:  I think for anyone who understood what I was saying, no 
attitude would have been seen there.  None intended, certainly:  it 
was just a simple boring statement of fact, not a trivialization of anyone.



I think with slightly different sentence constructions your
ideas would be better received and sound less condescending.  That's
all I was saying on that.


I mean "framework" in a very particular sense (something that is a 
theory generator but not by itself a theory, and which is a complete 
account of the domain of interest).  As such, there are few if any 
explicit frameworks in AI.  Implicit ones, yes, but not explicit.  I do 
not mean "framework" in the very loose sense of a "bunch of tools" or 
"bunch of mechanisms".


hmm... I never considered framework in that context.  I thought
framework referred to more of a scaffolding to enable work.  As such,
a scaffolding makes a specific kind of building.  Though I can see how
it can be general enough to apply the technique to multiple building
designs.


As for the comment above:  because of that problem I mentioned, I have
evolved a way to address it, and this approach means that I have to
devise a framework that allows an extremely wide variety of AI systems
to be constructed within the framework (this was all explained in my
paper).  As a result, the framework can encompass Ben's systems as
easily as any other.  It could even encompass a system built on pure
mathematical logic, if need be.


I believe I misunderstood your original statement.  This clarification
makes more sense.



Oh, nobody expects it to arise automatically - I just want the
system-building process to become more automated and less hand-crafted.


Again, I agree this is a good goal - but isn't it akin to optimizing
too early in a development process?  Sure, there are well-known
solutions to certain classes of problem.  Building a sloppy
implementation to those solutions is foolish when there are existing
'best practice' methods.  Is there currently a best practice way to
achieve AI?


Jeepers, no!!  There are narrow solutions to little issues that can be 
optimized, which arguably cannot be added to each other in any way, let 
alone integrated into a full AGI, let alone be optimal in a full AGI.


I think we are having this discussion because of a confusion about 
context.  All of this is about the particular program of research that I 
have adopted.  Within that context, there is no premature optimization 
going on:  in fact, exactly the opposite.  It is the most extreme form 
of not optimizing too early that you could possibly think of.



Let me preemptively agree that we should all continuously
strive to implement better practices than we may currently be
comfortable with - we should be doing that anyway.  (how can we build
self-improving systems if we are not examples of such ourselves)


My guess is that any system that is generalized enough to apply across
design paradigms will lack the granular details required for actual 
implementation.

On the contrary, that is why I have spent (am still spending) such an
incredible amount of effort on building the thing.  It is entirely
possible to envision a cross-paradigm framework.


With a different understanding of your use of framework I am less
dubious of this position.


Give me about $10 million a year in funding for the next three years,
and I will deliver that system to your desk on January 1st 2011.


Well, I'd love to have the cash on hand to prove you wrong.  It would
be a nice condition to have for both of us.


There is, though, the possibility that a lot of effort could be wasted
on yet another AI project that starts out with no clear idea of why it
thinks that its approach is any better than anything that has gone
before.  Given the sheer amount of wasted effort expended over the last
fifty years, I would be pretty upset to see it happen yet again.

Re: [agi] OpenCog

2007-12-29 Thread YKY (Yan King Yin)
On 12/28/07, Jean-Paul Van Belle [EMAIL PROTECTED] wrote:
 IMHO more important than working towards contributing clean code would be
to *publish the (required) interfaces for the modules as well as give
standards for/details on the knowledge representation format*. I am sure
that you have those spread over various internal and published documents
(indeed, developing a system like Novamente or proposing a framework is
impossible without those) but a cut-and-paste of the relevant sections is
essential documentation for the framework. Also a concrete example of how a
third-party module would slot into this framework would be mightily useful.

 I am raising this because many would-be AGI developers have to decide on
an interface and KR standard even if they develop their own proprietary
system - lots of mileage would be gotten from not having to reinvent the
wheel.

I agree that it would be a nice thing, but it requires people to have
similar AGI architectures.

Another problem is that it's very hard to get it started, but once it's
started it would be easier for people to join since they only have to focus
on their own module.

YKY

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=80253985-7285bc

Re: [agi] OpenCog

2007-12-28 Thread Vladimir Nesov
On Dec 28, 2007 4:17 AM, Ed Porter [EMAIL PROTECTED] wrote:
 Richard,

 You are entitled to your reservations about OpenCog, but others, like me,
 are entitled to our enthusiasms about it.

 You are correct that OpenCog starts with a certain approach, but I think it
 is an approach that has a lot of promise, and if it has fatal limitations,
 hopefully OpenCog will help us learn about them, so either the system can be
 improved, or replaced by a better approach.

 If you have another approach, I wish you good luck with it.


I can't be too enthusiastic about OpenCog yet because I know next to
nothing about it, despite all these 'executive' publications and stray
papers about Novamente. Let's wait and see.

-- 
Vladimir Nesovmailto:[EMAIL PROTECTED]



Re: [agi] OpenCog

2007-12-28 Thread YKY (Yan King Yin)
OpenCog is definitely a positive thing to happen in the AGI scene.  It's
been all vaporware so far.

I wonder what would be the level of participation?

Also I think it's going to increase the chance of a safe takeoff, by
exposing users and developers gradually to AGI.  But we also need to have
some security measures.

I look forward to seeing it!

YKY


Re: [agi] OpenCog

2007-12-28 Thread Richard Loosemore

Benjamin Goertzel wrote:

I wish you much luck with your own approach   And, I would imagine
that if you create a software framework supporting your own approach
in a convenient way, my own currently favored AI approaches will not
be conveniently explorable within it.  That's the nature of framework-building.


Actually, that would be a serious misunderstanding of the framework and 
development environment that I am building.  Your system would be just 
as easy to build as any other.


My purpose is to create a description language that allows us to talk 
about different types of AGI system, and then construct design 
variations automatically.
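[Purely as an illustration of what such a description language might look like — none of these class names, axes, or values come from Loosemore's actual framework, which he had not published; they are hypothetical stand-ins:]

```python
# Hypothetical sketch of a "description language" for AGI designs;
# all names and design axes here are invented for illustration.
import itertools
from dataclasses import dataclass

@dataclass(frozen=True)
class DesignSpec:
    control: str   # e.g. "agenda", "blackboard"
    learning: str  # e.g. "hebbian", "evolutionary", "logical"
    memory: str    # e.g. "hypergraph", "vector"

def design_variations(axes):
    """Enumerate candidate designs from a declarative description,
    i.e. construct design variations automatically."""
    keys = sorted(axes)  # control, learning, memory
    for combo in itertools.product(*(axes[k] for k in keys)):
        yield DesignSpec(**dict(zip(keys, combo)))

axes = {
    "memory":   ["hypergraph", "vector"],
    "learning": ["hebbian", "evolutionary", "logical"],
    "control":  ["agenda", "blackboard"],
}
variants = list(design_variations(axes))
print(len(variants))  # 2 * 3 * 2 = 12 candidate designs
```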





Richard Loosemore



Re: [agi] OpenCog

2007-12-28 Thread Benjamin Goertzel
On Dec 28, 2007 5:59 AM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:

 OpenCog is definitely a positive thing to happen in the AGI scene.  It's
 been all vaporware so far.

Yes, it's all vaporware so far ;-)

On the other hand, the code we hope to release as part of OpenCog actually
exists, but it's not yet ready for opening-up as some of it needs to
be extracted from
the overall Novamente code base, and other parts of it need to be cleaned-up
in various ways...

Much of the reason for yakking about it months in advance of releasing it
was a desire to assess the level of enthusiasm for it.  There are a number
of enthusiastic potential OpenCog developers on the OpenCog mail list, so
in that regard, I feel the response has been enough to merit proceeding
with the project...


 I wonder what would be the level of participation?

Time will tell!

-- Ben



Re: [agi] OpenCog

2007-12-28 Thread Mike Dougherty
On Dec 28, 2007 8:28 AM, Richard Loosemore [EMAIL PROTECTED] wrote:
 Actually, that would be a serious misunderstanding of the framework and
 development environment that I am building.  Your system would be just
 as easy to build as any other.

... considering the proliferation of AGI frameworks, it would appear
that any other framework is pretty easy to build, no?  ok, I'm being
deliberately snarky - but if someone wrote about your own work the way
you write about others, I imagine you would become increasingly
defensive.

 My purpose is to create a description language that allows us to talk
 about different types of AGI system, and then construct design
  variations automatically.

I do believe an academic formalism for discussing AGI would be
valuable to allow different camps to identify their
similarity/difference in approach and implementation.  However, I do
not believe that AGI will arise automatically from meta-discussion.
My guess is that any system that is generalized enough to apply across
design paradigms will lack the granular details required for actual
implementation.  I applaud the effort required to succeed at your
task, but it does not seem to me that you are building AGI as much as
inventing a lingua franca for AGI builders.

I admit in advance that I may be wrong.  This is (after all) just a
friendly discussion list and nobody's livelihood is being threatened
here, right?



Re: [agi] OpenCog

2007-12-28 Thread Jean-Paul Van Belle
IMHO more important than working towards contributing clean code would be to 
*publish the (required) interfaces for the modules as well as give standards 
for/details on the knowledge representation format*. I am sure that you have 
those spread over various internal and published documents (indeed, developing 
a system like Novamente or proposing a framework is impossible without those) 
but a cut-and-paste of the relevant sections is essential documentation for 
the framework. Also a concrete example of how a third-party module would slot 
into this framework would be mightily useful.

I am raising this because many would-be AGI developers have to decide on an 
interface and KR standard even if they develop their own proprietary system - 
lots of mileage would be gotten from not having to reinvent the wheel.

=Jean-Paul
-- 

Research Associate: CITANDA
Post-Graduate Section Head 
Department of Information Systems
Phone: (+27)-(0)21-6504256
Fax: (+27)-(0)21-6502280
Office: Leslie Commerce 4.21


 On 2007/12/28 at 14:59, in message
[EMAIL PROTECTED], Benjamin
Goertzel [EMAIL PROTECTED] wrote:
 On Dec 28, 2007 5:59 AM, YKY (Yan King Yin)
 [EMAIL PROTECTED] wrote:

 OpenCog is definitely a positive thing to happen in the AGI scene.  It's
 been all vaporware so far.
 
 Yes, it's all vaporware so far ;-)
 
 On the other hand, the code we hope to release as part of OpenCog actually
 exists, but it's not yet ready for opening-up as some of it needs to
 be extracted from
 the overall Novamente code base, and other parts of it need to be cleaned-up
 in various ways...
 
 Much of the reason for yakking about it months in advance of releasing it
 was a desire to assess the level of enthusiasm for it.  There are a number
 of enthusiastic potential OpenCog developers on the OpenCog mail list, so
 in that regard, I feel the response has been enough to merit proceeding
 with the project...
 
 
 I wonder what would be the level of participation?
 
 Time will tell!
 
 -- Ben
 

Re: [agi] OpenCog

2007-12-28 Thread Benjamin Goertzel
On Dec 28, 2007 8:28 AM, Richard Loosemore [EMAIL PROTECTED] wrote:
 Benjamin Goertzel wrote:
  I wish you much luck with your own approach   And, I would imagine
  that if you create a software framework supporting your own approach
  in a convenient way, my own currently favored AI approaches will not
  be conveniently explorable within it.  That's the nature of 
  framework-building.

 Actually, that would be a serious misunderstanding of the framework and
 development environment that I am building.  Your system would be just
 as easy to build as any other.

 My purpose is to create a description language that allows us to talk
 about different types of AGI system, and then construct design
 variations automatically.

I don't believe it is possible to create a framework that both

a) is unbiased regarding design type

b) makes it easy to construct AGI designs

Just as different programming languages are biased toward different types
of apps, so with different AGI frameworks...

-- Ben



Re : [agi] OpenCog

2007-12-28 Thread Bruno Frandemiche
http://gbbopen.org/



- Original Message 
From: Benjamin Goertzel [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Friday, 28 December 2007, 15:14:10
Subject: Re: [agi] OpenCog

On Dec 28, 2007 8:28 AM, Richard Loosemore [EMAIL PROTECTED] wrote:
 Benjamin Goertzel wrote:
  I wish you much luck with your own approach  And, I would imagine
  that if you create a software framework supporting your own approach
  in a convenient way, my own currently favored AI approaches will not
  be conveniently explorable within it.  That's the nature of 
  framework-building.

 Actually, that would be a serious misunderstanding of the framework and
 development environment that I am building.  Your system would be just
 as easy to build as any other.

 My purpose is to create a description language that allows us to talk
 about different types of AGI system, and then construct design
 variations automatically.

I don't believe it is possible to create a framework that both

a) is unbiased regarding design type

b) makes it easy to construct AGI designs

Just as different programming languages are biased toward different types
of apps, so with different AGI frameworks...

-- Ben



  


Re: [agi] OpenCog

2007-12-28 Thread Richard Loosemore

Benjamin Goertzel wrote:

On Dec 28, 2007 8:28 AM, Richard Loosemore [EMAIL PROTECTED] wrote:

Benjamin Goertzel wrote:

I wish you much luck with your own approach   And, I would imagine
that if you create a software framework supporting your own approach
in a convenient way, my own currently favored AI approaches will not
be conveniently explorable within it.  That's the nature of framework-building.

Actually, that would be a serious misunderstanding of the framework and
development environment that I am building.  Your system would be just
as easy to build as any other.

My purpose is to create a description language that allows us to talk
about different types of AGI system, and then construct design
variations automatically.


I don't believe it is possible to create a framework that both

a) is unbiased regarding design type


Nobody says "unbiased".


b) makes it easy to construct AGI designs


Then you have not been paying attention :-) (because I know for a fact 
that I have said this to you in the past).


I am specifically targeting the problem of making it easier.

In my environment your Novamente system would be harder to implement 
than a system that is better suited to my framework, BUT the point of 
all the effort I am making is that your system would be (e.g.) ten times 
easier to build than it is now, whereas my type of AGI design would be 
(e.g.) a thousand times easier to build than it would be if I had to 
hand craft it using the currently available tools.  Either way, it would 
be easier.




Richard Loosemore



Re: [agi] OpenCog

2007-12-28 Thread Richard Loosemore

Mike Dougherty wrote:

On Dec 28, 2007 8:28 AM, Richard Loosemore [EMAIL PROTECTED] wrote:

Actually, that would be a serious misunderstanding of the framework and
development environment that I am building.  Your system would be just
as easy to build as any other.


... considering the proliferation of AGI frameworks, it would appear
that any other framework is pretty easy to build, no?  ok, I'm being
deliberately snarky - but if someone wrote about your own work the way
you write about others, I imagine you would become increasingly
defensive.


You'll have to explain, because I am honestly puzzled as to what you 
mean here.


I mean "framework" in a very particular sense (something that is a 
theory generator but not by itself a theory, and which is a complete 
account of the domain of interest).  As such, there are few if any 
explicit frameworks in AI.  Implicit ones, yes, but not explicit.  I do 
not mean "framework" in the very loose sense of a "bunch of tools" or 
"bunch of mechanisms".


And in my comment to Ben, I said "any other" in reference to a 
particular AI system, not referring to frameworks at all.


As for the way I write about others' work: I don't understand.  I 
have done a particular body of research in AI/cognitive science, and as 
a result I have published a paper in which I have explained that there 
is a very serious problem with the methodological foundations of all 
current approaches to AI.  As a result I am obliged to point out that 
many things said about AI fall within the scope of that problem.  This 
is not personal nastiness on my part, just a consequence of the research 
I have done.  Should anyone become defensive or offended by that?  Not 
at all.  So I am confused.


As for the comment above:  because of that problem I mentioned, I have 
evolved a way to address it, and this approach means that I have to 
devise a framework that allows an extremely wide variety of AI systems 
to be constructed within the framework (this was all explained in my 
paper).  As a result, the framework can encompass Ben's systems as 
easily as any other.  It could even encompass a system built on pure 
mathematical logic, if need be.


This is not a particularly dramatic statement.


My purpose is to create a description language that allows us to talk
about different types of AGI system, and then construct design
variations automatically.


I do believe an academic formalism for discussing AGI would be
valuable to allow different camps to identify their
similarity/difference in approach and implementation.  However, I do
not believe that AGI will arise automatically from meta-discussion.


Oh, nobody expects it to arise automatically - I just want the 
system-building process to become more automated and less hand-crafted.



My guess is that any system that is generalized enough to apply across
design paradigms will lack the granular details required for actual
implementation.


On the contrary, that is why I have spent (am still spending) such an 
incredible amount of effort on building the thing.  It is entirely 
possible to envision a cross-paradigm framework.


Give me about $10 million a year in funding for the next three years, 
and I will deliver that system to your desk on January 1st 2011.



I applaud the effort required to succeed at your
task, but it does not seem to me that you are building AGI as much as
inventing a lingua franca for AGI builders.


Not really.  I don't want a lingua franca as such; I just need the LF as 
part of the process of addressing the complex systems problem.



I admit in advance that I may be wrong.  This is (after all) just a
friendly discussion list and nobody's livelihood is being threatened
here, right?


No, especially since few people are being paid full time to work on AGI 
projects.


There is, though, the possibility that a lot of effort could be wasted 
on yet another AI project that starts out with no clear idea of why it 
thinks that its approach is any better than anything that has gone 
before.  Given the sheer amount of wasted effort expended over the last 
fifty years, I would be pretty upset to see it happen yet again.



Richard Loosemore

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=80020995-5b8a2d


Re: [agi] OpenCog

2007-12-28 Thread Mike Dougherty
On Dec 28, 2007 1:55 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
 Mike Dougherty wrote:
  On Dec 28, 2007 8:28 AM, Richard Loosemore [EMAIL PROTECTED] wrote:
  Actually, that would be a serious misunderstanding of the framework and
  development environment that I am building.  Your system would be just
  as easy to build as any other.
 
  ... considering the proliferation of AGI frameworks, it would appear
  that "any other" framework is pretty easy to build, no?  ok, I'm being
  deliberately snarky - but if someone wrote about your own work the way
  you write about others, I imagine you would become increasingly
  defensive.

 You'll have to explain, because I am honestly puzzled as to what you
 mean here.

I am not a published computer scientist.  I recognize there are a lot
of brains here working at a level beyond my experience.  I was only
pointing out that using language like "just as easy to build" to
trivialize your system could be confrontational.  It may not
deliberately offend anyone, either because they are also not concerned
about this nuance or they discount your attitude as a matter of
course.  I think with slightly different sentence constructions your
ideas would be better received and sound less condescending.  That's
all I was saying on that.

 I mean "framework" in a very particular sense (something that is a
 theory generator but not by itself a theory, and which is a complete
 account of the domain of interest).  As such, there are few if any
 explicit frameworks in AI.  Implicit ones, yes, but not explicit.  I do
 not mean "framework" in the very loose sense of "bunch of tools" or
 "bunch of mechanisms".

hmm... I never considered "framework" in that context.  I thought
"framework" referred to more of a scaffolding to enable work.  As such,
a scaffolding makes a specific kind of building.  Though I can see how
the technique can be general enough to apply to multiple building
designs.

 As for the comment above:  because of that problem I mentioned, I have
 evolved a way to address it, and this approach means that I have to
 devise a framework that allows an extremely wide variety of AI systems
 to be constructed within the framework (this was all explained in my
 paper).  As a result, the framework can encompass Ben's systems as
 easily as any other.  It could even encompass a system built on pure
 mathematical logic, if need be.

I believe I misunderstood your original statement.  This clarification
makes more sense.


 Oh, nobody expects it to arise automatically - I just want the
 system-building process to become more automated and less hand-crafted.

Again, I agree this is a good goal - but isn't it akin to optimizing
too early in a development process?  Sure, there are well-known
solutions to certain classes of problem.  Building a sloppy
implementation to those solutions is foolish when there are existing
'best practice' methods.  Is there currently a best practice way to
achieve AI?  Let me preemptively agree that we should all continuously
strive to implement better practices than we may currently be
comfortable with - we should be doing that anyway.  (How can we build
self-improving systems if we are not examples of such ourselves?)

  My guess is that any system that is generalized enough to apply across
  design paradigms will lack the granular details required for actual 
  implementation.
 On the contrary, that is why I have spent (am still spending) such an
 incredible amount of effort on building the thing.  It is entirely
 possible to envision a cross-paradigm framework.

With a different understanding of your use of "framework", I am less
dubious of this position.

 Give me about $10 million a year in funding for the next three years,
 and I will deliver that system to your desk on January 1st 2011.

Well, I'd love to have the cash on hand to prove you wrong.  It would
be a nice condition to have for both of us.

 There is, though, the possibility that a lot of effort could be wasted
 on yet another AI project that starts out with no clear idea of why it
 thinks that its approach is any better than anything that has gone
 before.  Given the sheer amount of wasted effort expended over the last
 fifty years, I would be pretty upset to see it happen yet again.

Considering the amount of wasted effort in every other sector that I
have experience with, I think you should keep your expectations low.
Again, I would like to be wrong.



RE: [agi] OpenCog

2007-12-27 Thread Ed Porter
Richard,

You are entitled to your reservations about OpenCog, but others, like me,
are entitled to our enthusiasms about it.

You are correct that OpenCog starts with a certain approach, but I think it
is an approach that has a lot of promise, and if it has fatal limitations,
hopefully OpenCog will help us learn about them, so either the system can be
improved, or replaced by a better approach.

If you have another approach, I wish you good luck with it.

Ed Porter

-Original Message-
From: Richard Loosemore [mailto:[EMAIL PROTECTED] 
Sent: Thursday, December 27, 2007 7:19 PM
To: agi@v2.listbox.com
Subject: [agi] OpenCog

Ed Porter wrote:
 OpenCog: A Software Framework for Integrative Artificial General 
 Intelligence by  Dave Hart and Ben Goertzel says
 
 Contingent upon funding for OpenCog proceeding as planned, we are 
 targeting 1H08 for our first official code release, to be accompanied by 
 a full complement of documentation, tools, and development support
 
 Is there any show of support from people on the AGI and OpenCog lists 
 that might help the funding effort, such as making small contributions 
 or writing emails to any of the major potential contributors, that might 
 help persuade them of the need, importance, and desire in the AGI 
 community for this effort?

I am sorry, but I have reservations about the OpenCog project.

The problem of building an open-source AI calls for a framework-level 
tool that is specifically designed to allow a wide variety of 
architectures to be described and expressed.

OpenCog, as far as I can see, does not do this, but instead takes a 
particular assortment of mechanisms as its core, then suggests that 
people add modules onto this core.  This is not a framework-level 
approach, but a particular-system approach that locks all future work 
into the limitations of the initial core.

For example, I have many, many AGI designs that I need to explore, but 
as far as I can see, none of them can be implemented at all within the 
OpenCog system.  I would have to rewrite OpenCog completely to get it to 
meet my needs.



Richard Loosemore



Re: [agi] OpenCog

2007-12-27 Thread Benjamin Goertzel
Loosemore wrote:
 I am sorry, but I have reservations about the OpenCog project.

 The problem of building an open-source AI calls for a framework-level
 tool that is specifically designed to allow a wide variety of
 architectures to be described and expressed.

 OpenCog, as far as I can see, does not do this, but instead takes a
 particular assortment of mechanisms as its core, then suggests that
 people add modules onto this core.  This is not a framework-level
 approach, but a particular-system approach that locks all future work
 into the limitations of the initial core.

 For example, I have many, many AGI designs that I need to explore, but
 as far as I can see, none of them can be implemented at all within the
 OpenCog system.  I would have to rewrite OpenCog completely to get it to
 meet my needs.

Hi Richard,

To be sure, OpenCog is not intended to be equally useful for all possible
AGI approaches.

To provide something equally useful for all AGI approaches, one would
need to make something extremely broad -- basically, one would need to
make a highly general-purpose
 operating-system and/or programming-language, rather than
a specific software framework.

OpenCog is designed to support a certain family of AGI designs, but
is not designed to conveniently support all possible AGI designs.

Definitely, there is room in the world for more than one AGI framework.

As an example, the CCortex platform seems like it may be a good
framework within which to build biologically realistic NN-based AGI
systems (note: this is based on their literature only; I've never tried
their system).

I wish you much luck with your own approach.  And, I would imagine
that if you create a software framework supporting your own approach
in a convenient way, my own currently favored AI approaches will not
be conveniently explorable within it.  That's the nature of framework-building.

-- Ben
