Re: Yawn. More definitions of intelligence? [WAS Re: [agi] Ben's Definition of Intelligence]

2008-01-14 Thread Benjamin Goertzel
Richard,

I don't think Shane and Marcus's overview of definitions-of-intelligence
is of poor quality.

I think it is just doing something different than what you think it should be
doing.

The overview is exactly that: A review of what researchers have said about
the definition of intelligence.

This is useful as a view into the cultural mind-space of the research
community regarding the intelligence concept.

As for their formal definition of intelligence, I think it is worthwhile as a
precise formulation of one perspective on the multidimensional concept
of intelligence.  I don't agree with them that they have somehow captured
the essence of the concept of intelligence in their formal definition though;
I think they have just captured one aspect...

-- Ben G


On 1/14/08, Richard Loosemore [EMAIL PROTECTED] wrote:
 Pei Wang wrote:
  On Jan 13, 2008 7:40 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
 
  And, as I indicated, my particular beef was with Shane Legg's paper,
  which I found singularly content-free.
 
  Shane Legg and Marcus Hutter have a recent publication on this topic,
  http://www.springerlink.com/content/jm81548387248180/
  which is much richer in content.

 Unfortunately, this paper is not so much richer in content as
 containing a larger number of words and formulae.  It adds nothing to
 the previous (poor quality) paper, falls into exactly the same pitfalls
 as before, and repeats the trick of pulling an arbitrary mathematical
 definition out of the hat without saying why this definition should
 correspond with the natural or commonsense definition.

 Any fool can mathematize a definition of a commonsense idea without
 actually saying anything new.


 Richard Loosemore



[agi] Re: [singularity] The establishment line on AGI

2008-01-14 Thread Benjamin Goertzel
 Also, this would involve creating a close-knit community through
 conferences, journals, common terminologies/ontologies, email lists,
 articles, books, fellowships, collaborations, correspondence, research
 institutes, doctoral programs, and other such devices. (Popularization is
 not on the list of community-builders, although it may have its own value.)
 Ben has been involved in many efforts in these directions -- I wonder if he
 was thinking of Kuhn.

Indeed, working toward the formation of such a community is one of the
motivations underlying the AGI-08 conference.   And also underlying the
OpenCog AGI project I'm initiating together with the SIAI, see
opencog.org

My prior efforts in this direction, such as

-- AGI email list
-- 2006 AGI workshop
-- two AGI edited volumes

have been successful but smaller-scale.

My feeling is that the time is ripe for the self-organization of a really viable
AGI research community.

In connection with AGI-08, we have put up a wiki page intended to
gather proposals and suggestions regarding the formation of a more
robust AGI community

http://www.agi-08.org/proposals.php

If any of y'all have relevant ideas, feel free to post them there.

I don't actually have a lot of time for community-building activities, as my
main focus is on Novamente LLC (and Novamente's work on AGI plus its
narrow-AI consulting work that pays my bills).  But, I try to make time for
community-building, because I think it's very important and will benefit
all of us working in the field.

I did read Kuhn back in college, and was impressed with his insight,
along with (even more so) that of Imre Lakatos, with his theory of
scientific research programmes.  In Lakatos's terms, what needs to be
done is to build a community that can turn AGI into an overall
progressive research program.  I discuss these philosophy of science
ideas a bit in The Hidden Pattern, and earlier in an essay

http://www.goertzel.org/dynapsyc/2004/PhilosophyOfScience_v2.htm

Further back, I remember when I was 5 years old, reading a draft of
a book my dad was writing (a textbook of Marxist sociology), and
encountering the word "paradigm" and not knowing what it meant.
As I recall, I asked him and he tried to explain and I did not understand
the explanation very well ;-p ... and truth be told, I still find it a fuzzy
term, preferring Lakatos's characterization of research programmes.
However, Kuhn had more insight than Lakatos into the sociological
dynamics surrounding scientific research programmes...

-- Ben G



Re: Yawn. More definitions of intelligence? [WAS Re: [agi] Ben's Definition of Intelligence]

2008-01-14 Thread Benjamin Goertzel
 Your job is to be diplomatic.  Mine is to call a spade a spade. ;-)


 Richard Loosemore

I would rephrase it like this: Your job is to make me look diplomatic ;-p



Re: [agi] Ben's Definition of Intelligence

2008-01-12 Thread Benjamin Goertzel
On definitions of intelligence, the canonical reference is

http://www.vetta.org/shane/intelligence.html

which lists 71 definitions.  Apologies if someone already pointed out
Shane's page in this thread, I didn't read every message carefully.

 An AGI definition of intelligence surely has, by definition! - to be
 general rather than complex and emphasize general
 problemsolving/learning. That seems to be what you actually mean.

Mike:
Obviously, my "achieving complex goals in complex environments"
definition is intended to include generality.  It could be rephrased as
"effectively achieving a wide variety of complex goals in various
complex environments", with the "general" implicit in the "wide".

I also gave a math version of the definition in 1993, which is
totally unambiguous due to being math rather than words.  I have
not bothered to work out the precise relationship between my older math
definition and Shane Legg and Marcus Hutter's more recent math
definition of intelligence.  They are not identical but have a similar
spirit.
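
For reference, the Legg-Hutter formalization (paraphrasing their paper here;
the notation may differ slightly from theirs) defines the universal intelligence
of an agent \pi as its expected performance summed over all computable
environments \mu, weighted by simplicity:

    \Upsilon(\pi) = \sum_{\mu} 2^{-K(\mu)} V^{\pi}_{\mu}

where K(\mu) is the Kolmogorov complexity of the environment and V^{\pi}_{\mu}
is the agent's expected cumulative reward in it; simpler environments are
weighted more heavily via the 2^{-K(\mu)} factor.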

 Intelligence has many dimensions. A crucial dimension of a true
 intelligence* is that it is general. It is a general problem-solver and
 general learner, able to solve, and learn how to solve,  problems in many,
 and potentially infinite, domains - *without* being specially preprogrammed
 for any one of them.  All computers to date have been specialists. The goal
 of Artificial General Intelligence is to create the first generalist.


The problem with your above definition is that it uses terms that are
themselves so extremely poorly-defined ;-)

Arguably it rules out the brain, which is heavily preprogrammed by
evolution in order to be good at certain things like vision, arm and
hand movement, social interaction, language parsing, etc.

And it does not rule out AIXItl type programs, which achieve flexibility
trivially, at the cost of using unacceptably large amounts of computational
resources...

The reality is that achieving general intelligence given finite resources
is probably always going to involve a combination of in-built
biases and general learning ability.

And where the line is drawn between in-built biases and
preprogramming is something that current comp/cog-sci does
not allow us to formally articulate in a really useful way.
This is  a subtle issue, as e.g.
a program for carrying out a specific task, coupled with a general-
purpose learner of the right level of capability, may in effect
serve as a broader inductive bias helping with a wider variety
of tasks.

-- Ben



Re: [agi] Incremental Fluid Construction Grammar released

2008-01-10 Thread Benjamin Goertzel
 I'll be a lot more interested when people start creating NLP systems
 that are syntactically and semantically processing statements about
 words, sentences and other linguistic structures and adding syntactic
 and semantic rules based on those sentences.

Depending on exactly what you mean by this, it's not a very far-off
thing, and there probably are systems that do this in various ways.

In a lexical grammar approach to NLP, most of the information about the
grammar is in the lexicon.  So all that's required for the system to
learn new syntactic rules is to make the lexicon adaptive.

For instance, in the link grammar framework, all that's required is for
the AI to be able to edit the link grammar dictionary, which specifies the
syntactic link types associated with various words.  This just requires
a bit of abductive inference of the general form:

1)
I have no way to interpret sentence S syntactically, yet pragmatically I know
that sentence S is supposed to mean (set of logical relations) M

2)
If word W (in sentence S) had syntactic link type L attached to it, then
I could syntactically interpret sentence S to yield meaning M

3)
Thus, I abductively infer that W should have L attached to it
(with a certain level of probabilistic confidence)


There is nothing conceptually difficult here, and nothing beyond the
state of the art.  The link grammar exists (among other frameworks),
and multiple frameworks for abductive inference exist (including
Novamente's PLN framework).

The bottleneck is really the presence of data of type 1), i.e. of instances
in which the system knows what a sentence is supposed to mean even
though it can't syntactically parse it.

One way to get a system this kind of data is via embodiment.   But this is
not the only way.  It can also be done via pure conversation, for
example.

Suppose i'm talking to an AI, as follows:

AI: What's your name
Ben: I be Ben Goertzel
AI: What??
Ben: I am Ben Goertzel
AI: Thanks

Now, the AI may not know the grammatical rule needed to parse

I be Ben Goertzel

But, after the conversation is done, it knows that the meaning is
supposed to be equivalent to that of

I am Ben Goertzel

and thus it can edit its grammar (e.g. the link parser dictionary)
appropriately, in this case to incorporate the Ebonic grammatical
structure of "be".
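
As a purely illustrative sketch of that kind of dictionary edit -- the lexicon
format, link-type names and confidence value below are invented for this
example, not the actual link-grammar dictionary format or any Novamente code:

# Toy lexicon: word -> set of syntactic link types it may carry.
lexicon = {
    "am": {"Ss-", "O+"},   # hypothetical copula link types
    "be": set(),           # "be" cannot yet be parsed as a copula here
}

def abduce_links(unparsable_word, known_paraphrase_word, lexicon, confidence=0.8):
    """Abductive step: the failed sentence is known (pragmatically) to mean the
    same as a sentence we *can* parse, so tentatively copy the paraphrase
    word's link types onto the unparsable word, at confidence < 1."""
    inferred = lexicon[known_paraphrase_word] - lexicon[unparsable_word]
    lexicon[unparsable_word] |= inferred
    return [(unparsable_word, link, confidence) for link in inferred]

# "I be Ben Goertzel" failed to parse, but the conversation established that it
# means the same as "I am Ben Goertzel":
print(abduce_links("be", "am", lexicon))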

Another way to provide training of type 1) would be if the system
had a corpus of multiple different sentences all describing the
same thing -- wherein it could parse some of the sentences and
not others.

In short, I feel that adapting grammar rules based on experience
is not an extremely hard problem, though there are surely some
moderate-level hidden gotchas.  The bottlenecks in this regard
appear to be

-- getting the AI the experience

-- boring old systems integration


-- Ben G



Re: [agi] Incremental Fluid Construction Grammar released

2008-01-10 Thread Benjamin Goertzel
On Jan 10, 2008 10:26 AM, William Pearson [EMAIL PROTECTED] wrote:
 On 10/01/2008, Benjamin Goertzel [EMAIL PROTECTED] wrote:
   I'll be a lot more interested when people start creating NLP systems
   that are syntactically and semantically processing statements *about*
   words, sentences and other linguistic structures and adding syntactic
   and semantic rules based on those sentences.

 Note the new emphasis ;-) You example didn't have statements *about*
 words, but new rules were inferred from word usage.

Well, here's the thing.

Dictionary text and English-grammar-textbook text are highly ambiguous and
complex English... so you'll need a very sophisticated NLP system to be able
to grok them...

OTOH, you could fairly easily define a limited, controlled syntax encompassing
a variety of statements about words, sentences and other linguistic structures,
and then make a system add syntactic and semantic rules based on these
sentences.

But I don't see what the point would be, because telling the system
stuff in the
controlled syntax would be basically as much work as explicitly encoding
the rules...
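
A tiny hypothetical example of that trade-off (the controlled-syntax template
and the rule format below are both invented for this illustration):

import re

# A controlled-syntax statement about a word...
statement = 'The word "on" can mean that one thing is above and supported by another.'

# ...and the explicit rule a trivial extractor would pull out of it.
pattern = re.compile(r'The word "(\w+)" can mean that one thing is (.+?) another\.')
m = pattern.match(statement)
rule = {"word": m.group(1), "relation": m.group(2)}

print(rule)   # {'word': 'on', 'relation': 'above and supported by'}

Writing the controlled-syntax sentence requires spelling out exactly the same
information as writing the rule directly, which is the point above.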

-- Ben



Re: [agi] Incremental Fluid Construction Grammar released

2008-01-10 Thread Benjamin Goertzel
Hi,

 Yes, the Texai implementation of Incremental Fluid Construction Grammar
 follows the phrase structure approach in which leaf lexical constituents are
 grouped into a structure (i.e. construction) hierarchy.  Yet, because it is
 incremental and thus cognitively plausible, it should scale to longer
 sentences better than any non-incremental alternative.

I agree that the incremental approach to parsing is the correct one,
as opposed to the "whole sentence at once" approach taken in the link
parser and most other parsers.

However, this is really a quite separate issue from the choice of hierarchical
phrase structure based grammar versus dependency grammar.  For instance,
Word Grammar is a dependency based approach that incorporates
incremental parsing (but has not been turned into a viable computational
system).
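
For readers who want the distinction made concrete, here is a minimal
illustration on an invented sentence (this is not Texai, FCG or link-parser
output):

sentence = "the dog chased the cat"

# Phrase-structure (constituency) analysis: nested constituents.
constituency = ("S",
                ("NP", "the", "dog"),
                ("VP", "chased", ("NP", "the", "cat")))

# Dependency analysis: a flat set of head -> dependent links.
dependency = [("chased", "subj", "dog"),
              ("chased", "obj",  "cat"),
              ("dog",    "det",  "the"),
              ("cat",    "det",  "the")]

Incrementality is orthogonal: either style of structure can in principle be
built up word by word as a sentence comes in.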

-- Ben G



Re: [agi] Incremental Fluid Construction Grammar released

2008-01-10 Thread Benjamin Goertzel
Do you plan to pay these non-experts, or recruit them as volunteers?

ben

On Jan 10, 2008 1:11 PM, Stephen Reed [EMAIL PROTECTED] wrote:

 Granted that from a logical viewpoint, using a controlled English syntax to
 acquire rules is as much work as explicitly encoding the rules.  However, a
 suitable, engaging, bootstrap dialog system may permit a multitude of
 non-expert users to add the rules, thus dramatically reducing the amount of
 programmatic encoding, and the duration of the effort.  That is my
 hypothesis and plan.

 -Steve

 Stephen L. Reed

 Artificial Intelligence Researcher
 http://texai.org/blog
 http://texai.org
 3008 Oak Crest Ave.
 Austin, Texas, USA 78704
 512.791.7860



 - Original Message 
 From: Benjamin Goertzel [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Sent: Thursday, January 10, 2008 11:06:45 AM
 Subject: Re: [agi] Incremental Fluid Construction Grammar released


  On Jan 10, 2008 10:26 AM, William Pearson [EMAIL PROTECTED] wrote:
  On 10/01/2008, Benjamin Goertzel [EMAIL PROTECTED] wrote:
I'll be a lot more interested when people start creating NLP systems
that are syntactically and semantically processing statements *about*
words, sentences and other linguistic structures and adding syntactic
and semantic rules based on those sentences.
 
  Note the new emphasis ;-) You example didn't have statements *about*
  words, but new rules were inferred from word usage.

 Well, here's the thing.

 Dictionary text and English-grammar-textbook text are highly ambiguous and
 complex English... so you'll need a very sophisticated NLP system to be able
 to grok them...

 OTOH, you could fairly easily define a limited, controlled syntax
 encompassing
 a variety of statements about words, sentences and other linguistic
 structures,
 and then make a system add syntactic and semantic rules based on these
 sentences.

 But I don't see what the point would be, because telling the system
 stuff in the
 controlled syntax would be basically as much work as explicitly encoding
 the rules...

 -- Ben



Re: [agi] Incremental Fluid Construction Grammar released

2008-01-10 Thread Benjamin Goertzel
On Jan 10, 2008 10:03 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
 All this discussion of building a grammar seems to ignore the obvious fact
 that in humans, language learning is a continuous process that does not
 require any explicit encoding of rules.  I think either your model should
 learn this way, or you need to explain why your model would be more successful
 by taking a different route.  Explicit encoding of grammars has a long history
 of failure, so your explanation should be good.  At a minimum, the explanation
 should describe how humans actually learn language and why your method is
 better.

Matt,

If you read the paper at the top of this list

http://www.novamente.net/papers/

you will see a brief summary of the reasoning behind the approach I am
taking.  It is only 8 pages long so it should be quick to read, though
it obviously
does not explain all details in that length.

The abstract is as follows:

*
Abstract— Current work is described wherein simplified
versions of the Novamente Cognition Engine (NCE) are being
used to control virtual agents in virtual worlds such as game
engines and Second Life.  In this context, an IRC (imitation-
reinforcement-correction) methodology is being used to teach
the agents various behaviors, including simple tricks and
communicative acts.   Here we describe how this work may
potentially be exploited and extended to yield a pathway
toward giving the NCE robust, ultimately human-level natural
language conversation capability.  The  pathway starts via
using the current system to instruct NCE-controlled agents in
semiosis and gestural communication; and then continues via
integration of a particular sort of hybrid rule-based/statistical
NLP system (which is currently partially complete) into the
NCE-based virtual agent system, in such a way as to allow
experiential adaptation of the rules underlying the NLP system,
*

I do not think that a viable design for an AGI needs to include a description of
human learning (of language or anything else).  No one understands exactly
how the human brain works yet, but that doesn't mean we can't potentially
have success with non-brain-emulating AGI approaches.

My favorite theorists of human language are Richard Hudson (see his 2007
book Language Networks) and Tomasello (see his book Constructing a
Language).  I actually believe my approach to language in AGI is quite
close to their ideas.  But I don't have time/space to justify this statement in
an email.

-- Ben


Re: [agi] Incremental Fluid Construction Grammar released

2008-01-09 Thread Benjamin Goertzel
 And how would a young child or foreigner interpret on the Washington
 Monument or shit list?  Both are physical objects and a book *could* be
 resting on them.

Sorry, my shit list is purely mental in nature ;-) ... at the moment, I maintain
 a task list but not a shit list... maybe I need to get better organized!!!

 Ben, your question is *very* disingenuous.

Who, **me** ???

 There is a tremendous amount of
 domain/real-world knowledge that is absolutely required to parse your
 sentences.  Do you have any better way of approaching the problem?

 I've been putting a lot of thought and work into trying to build and
 maintain precedence of knowledge structures with respect to disambiguating
 (and overriding incorrect) parsing . . . . and don't believe that it's going
 to be possible without a severe amount of knwledge . . . .

 What do you think?

OK...

Let's assume one is working within the scope of an AI system that
includes an NLP parser and
a logical knowledge representation system, and needs some intelligent way to map
the output of the former into the latter.

Then, in this context, there are three approaches, which may be tried
alone or in combination:

1)
Hand-code rules to map the output of the parser into a much less
ambiguous logical format

2)
Use statistical learning across a huge corpus of text to somehow infer
these rules
[I did not ever flesh out this approach as it seemed implausible, but
I have to recognize
its theoretical possibility]

3)
Use **embodied** learning, so that the system can statistically infer
the rules from the
combination of parse-trees with logical relationships that it observes
to describe
situations it sees
[This is the best approach in principle, but may require years and
years of embodied
interaction for a system to learn.]


Obviously, Cycorp has taken Approach 1, with only modest success.  But
I think part of
the reason they have not been more successful is a combination of a
bad choice of
parser with a bad choice of knowledge representation.  They use a
phrase structure
grammar parser and predicate logic, whereas I believe if one uses a dependency
grammar parser and term logic, the process becomes a lot easier.  So
far as I can tell,
in Texai you are replicating Cyc's choices in this regard (phrase
structure grammar +
predicate logic).

In Novamente, we are aiming at a combination of the 3 approaches.

We are encoding a bunch of rules, but we don't ever expect to get anywhere near
complete coverage with them, and we have mechanisms (some designed, some
already in place) that can
generalize the rule base to learn new, probabilistic rules, based on
statistical corpus
analysis and based on embodied experience.
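
A minimal sketch of what that kind of statistical generalization could look
like -- the relation names, data and confidence calculation are all invented
here, and this is not the actual Novamente/PLN machinery:

from collections import Counter

# Each observation pairs a syntactic pattern (from a parse) with a semantic
# relation the system independently knows holds in the described situation.
observations = [
    ("_subj(kill, X) & _obj(kill, Y)", "Killing(agent=X, victim=Y)"),
    ("_subj(kill, X) & _obj(kill, Y)", "Killing(agent=X, victim=Y)"),
    ("_subj(kill, X) & _obj(kill, Y)", "Harming(agent=X, patient=Y)"),
]

# Turn co-occurrence frequencies into candidate probabilistic mapping rules.
counts = Counter(observations)
pattern_totals = Counter(p for p, _ in observations)
rules = [(p, s, n / pattern_totals[p]) for (p, s), n in counts.items()]

for pattern, semantics, prob in rules:
    print(f"{pattern} ==> {semantics}   (p = {prob:.2f})")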

In our rule encoding approach, we will need about 5000 mapping rules to map
syntactic parses of commonsense sentences into term logic relationships.  Our
inference engine will then generalize these into hundreds of thousands
or millions
of specialized rules.

This is current work, research in progress.

We have about 1000 rules in place now and will soon stop coding them and start
experimenting with using inference to generalize and apply them.  If
this goes well,
then we'll put in the work to encode the rest of the rules (which is
not very fun work,
as you might imagine).

Emotionally and philosophically, I am more drawn to approach 3 (embodied
learning), but pragmatically, I have reluctantly concluded that the
hybrid approach
we're currently taking has the greatest odds of rapid success.

In the longer term, we intend to throw out the standalone grammar parser we're
using and have syntax parsing done via our core AI processing -- but we're now
using a standalone grammar parser as a sort of scaffolding.

I note that this is not the main NM R&D thrust right now -- it is at
the moment somewhat
separate from our work on embodied imitative/reinforcement/corrective
learning of
virtual agents.  However, the two streams of work are intended to come
together, as
I've outlined in my paper for WCCI 2008,

http://www.goertzel.org/new_research/WCCI_AGI.pdf

-- Ben




Re: [agi] Incremental Fluid Construction Grammar released

2008-01-09 Thread Benjamin Goertzel
A perhaps nicer example is

Get me the ball

for which RelEx outputs

definite(ball)
singular(ball)
imperative(get)
singular(me)
definite(me)
_obj(get, me)
_obj2(get, ball)

and RelExToFrame outputs

Bringing:Theme(get,me)
Bringing:Beneficiary(get,me)
Bringing:Theme(get,ball)
Bringing:Agent(get,you)

Note that the RelEx output is already abstracted
and semantified compared to what comes out of
a grammar parser.

-- Ben  

On Jan 9, 2008 5:59 PM, Benjamin Goertzel [EMAIL PROTECTED] wrote:
 
  Can you give about ten examples of rules?  (That would answer a lot of my
  questions above)

 That would just lead to a really long list of questions that I don't have time to
 answer right now

 In a month or two, we'll write a paper on the rule-encoding approach we're
 using, and I'll post it to the list, which will make this approach clearer.

  Where did you get the rules?  Did you hand-code them or get them from
  somewhere?

 As you know we have a system called RelEx that transforms the output of
 the link parser into higher-level semantic relationships.

 We then have a system of rules that map RelEx output into a set of
 frame-element relationships constructed mostly based on FrameNet.

 For the sentence

 Ben kills chickens

 RelEx outputs

 _obj(kill, chicken)
 present(kill)
 plural(chicken)
 uncountable(Ben)
 _subj(kill, Ben)

 and the RelExToFrame rules output

 Killing:Killer(kill,Ben)
 Killing:Victim(kill,chicken)
 Temporal_colocation:Event(present,kill)

 But I really don't have time to explain all the syntax and notation in
 detail... if it's not transparent...

 And I want to stress that I consider this kind of system pretty
 useless on its own, it's only potentially valuable if coupled with
 other components like we have in Novamente, such as an uncertain
 inference engine and an embodied learning system...

 Such rules IMO are mainly valuable to give a starting-point to a
 learning system, not as the sole or primary cognitive material of an
 AI system.  And using them as a starting-point requires very careful
 design...

 The 5000 rules figure is roughly rooted in the 825 frames in FrameNet;
 each frame corresponds to a number of rules, most of which are related
 to specific verb/preposition combinations.

 Another way to look at it is that each rule corresponds roughly to a
 Lojban word/argument combination... pretty much, FrameNet and the
 Lojban dictionary are doing the same thing, which is to precisely
 specify commonsense subcategorization frames.

 -- Ben




Re: [agi] Incremental Fluid Construction Grammar released

2008-01-09 Thread Benjamin Goertzel

 Can you give about ten examples of rules?  (That would answer a lot of my
 questions above)

That would just lead to a really long list of questions that I don't have time to
answer right now

In a month or two, we'll write a paper on the rule-encoding approach we're
using, and I'll post it to the list, which will make this approach clearer.

 Where did you get the rules?  Did you hand-code them or get them from
 somewhere?

As you know we have a system called RelEx that transforms the output of
the link parser into higher-level semantic relationships.

We then have a system of rules that map RelEx output into a set of
frame-element relationships constructed mostly based on FrameNet.

For the sentence

Ben kills chickens

RelEx outputs

_obj(kill, chicken)
present(kill)
plural(chicken)
uncountable(Ben)
_subj(kill, Ben)

and the RelExToFrame rules output

Killing:Killer(kill,Ben)
Killing:Victim(kill,chicken)
Temporal_colocation:Event(present,kill)

But I really don't have time to explain all the syntax and notation in
detail... if it's not transparent...

And I want to stress that I consider this kind of system pretty
useless on its own, it's only potentially valuable if coupled with
other components like we have in Novamente, such as an uncertain
inference engine and an embodied learning system...

Such rules IMO are mainly valuable to give a starting-point to a
learning system, not as the sole or primary cognitive material of an
AI system.  And using them as a starting-point requires very careful
design...

The 5000 rules figure is roughly rooted in the 825 frames in FrameNet;
each frame corresponds to a number of rules, most of which are related
to specific verb/preposition combinations.

Another way to look at it is that each rule corresponds roughly to a
Lojban word/argument combination... pretty much, FrameNet and the
Lojban dictionary are doing the same thing, which is to precisely
specify commonsense subcategorization frames.
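
To give the flavor of such a mapping rule, here is a purely illustrative
sketch in Python -- the rule and data formats are invented for this example
and are not the actual RelEx or frame-mapping rule syntax:

def killing_rule(relations):
    """relations: (relation, head, dependent) triples, RelEx-style."""
    frames = set()
    for rel, head, dep in relations:
        if head == "kill" and rel == "_subj":
            frames.add(("Killing:Killer", "kill", dep))
        if head == "kill" and rel == "_obj":
            frames.add(("Killing:Victim", "kill", dep))
    return frames

relex_output = {("_subj", "kill", "Ben"), ("_obj", "kill", "chicken"),
                ("present", "kill", None), ("plural", "chicken", None)}
print(killing_rule(relex_output))
# -> Killing:Killer(kill, Ben) and Killing:Victim(kill, chicken)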

-- Ben



Re: [agi] Incremental Fluid Construction Grammar released

2008-01-09 Thread Benjamin Goertzel
Processing a dictionary in a useful way
requires quite sophisticated language understanding ability, though.

Once you can do that, the hard part of the problem is already
solved ;-)

Ben

On Jan 9, 2008 7:22 PM, William Pearson [EMAIL PROTECTED] wrote:

 On 09/01/2008, Benjamin Goertzel [EMAIL PROTECTED] wrote:
  Let's assume one is working within the scope of an AI system that
  includes an NLP parser,
  a logical knowledge representation system, and needs some intelligent way 
  to map
  the output of the former into the latter.
 
  Then, in this context, there are three approaches, which may be tried
  alone or in combination:
 
  1)
  Hand-code rules to map the output of the parser into a much less
  ambiguous logical format
 
  2)
  Use statistical learning across a huge corpus of text to somehow infer
  these rules
  [I did not ever flesh out this approach as it seemed implausible, but
  I have to recognize
  its theoretical possibility]
 
  3)
  Use **embodied** learning, so that the system can statistically infer
  the rules from the
  combination of parse-trees with logical relationships that it observes
  to describe
  situations it sees
  [This is the best approach in principle, but may require years and
  years of embodied
  interaction for a system to learn.]
 

 Isn't there a 4th potential one? I would define the 4th as being something 
 like

 4) Use a language that can describe itself to bootstrap quickly new
 phrase usage. These can be seen in humans when processing
 dictionary/thesaurus like statements or learning a new language.

 The following paragraphs can be seen as examples of sentences that
 would need this kind of system to deal with and make use of the
 information in them:

 The word, on, can be used in many different situations. One of these
 is to imply one thing is above another and supported by it.

 The prefix dis can mean apart or break apart. Enchant can mean to take
 control by magical means. What might disenchant mean?  *

 ---End examples

 It requires the system to be able to process this statement then add
 the appropriate rules. It may be tentative in keeping or using the
 rules, gathering information on how useful it finds it while
 processing text. It is different from handcoding, because it should
 enable anyone to add rules after a minimal set of language description
 language has been added.

 It should be combined with 3 however, so that rules don't always need
 to be given explicitly. I think this type of learning/instruction has
 the ability to be a lot quicker than any system that mainly relies on
 inference.

 I don't know of systems that are using this sort of thing. And it is a
 bit above the level I am working at, at the moment. Anyone know of
 systems that parse and then use sentences in this fashion?

   Will Pearson

 * I'm unsure how much work people are doing on the use of prefixes and
 suffixes to infer the meaning/usage of new words. I certainly use it a
 lot myself.



Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-07 Thread Benjamin Goertzel
On Jan 7, 2008 9:12 AM, Mike Tintner [EMAIL PROTECTED] wrote:


 Robert,

 Look, the basic reality is that computers have NOT yet been creative in any
 significant way, and have NOT yet achieved AGI - general intelligence, - or
 indeed any significant rulebreaking adaptivity; (If you disagree, please
 provide examples. Ben keeps claiming/implying he's solved them or made
 significant advances, but when pressed never provides any indication of
 how).

We all agree that AGI is not yet achieved.

Space travel to Proxima Centauri is also not yet achieved, nor is human
cloning ... there is a big difference in science between

-- not yet achieved, but seems possible based on available knowledge

and

-- doesn't seem possible based on available knowledge

 If you are truly serious about solving these problems, I suggest, you should
 prepared to be hurt - you should be ready to consider truly radical ideas
 - for the ground on which you stand to be questioned - and be seriously
 shaken up. You should WELCOME any and all of your assumptions being
 questioned. Even if, let's say, what I or someone else suggests is in the
 end nutty, drastic ideas are good for you to contemplate at least for a
 while.

Most of us on this list are already aware of the possibility that it
is not possible
to achieve high levels of intelligence using digital computer programs, given
realistic space and time constraints.

It is scientifically possible that Penrose is right, and that to achieve human-like
levels of intelligence in a machine, one needs to use a machine making use
of weird, as yet poorly understood quantum gravity effects.

However, at present, that Penrose-ean hypothesis does not seem that likely
to most of us on this list; and given the current state of science, it's not a
hypothesis that we really can explore in detail.  Quantum gravity is
in a confused
state and quantum computing (let alone quantum gravity computing) is
in its infancy.

There is also always the possibility that the whole modern scientific world-view
is deeply flawed in a way that is relevant to AGI.  Maybe digital computers are
unable to lead to human-level AI, for some reason totally unrelated to
computability
theory and quantum gravity and all that.  There is plenty in the world
that we don't
understand -- I recommend Damien Broderick's recent and excellent book
Outside the Gates of Science for anyone who doesn't agree.

But, this list is devoted to exploring the hypothesis that AGI **can**
be achieved
via creating intelligent machines -- and mainly, at the moment, to the
hypothesis that
it can be achieved via creating intelligent digital computer programs.

We realize this hypothesis may be wrong, but it seems likely enough to
us to merit
a lot of attention and effort aimed at validation.

Your supposed arguments against the hypothesis are nowhere near as original
as you seem to think, and nearly everyone on this list has heard them before and
not found them convincing.  I read What Computers Can't Do by Hubert Dreyfus
as a child in the 1970s, and your diatribes don't seem to add anything to what
he said there.

If you think the whole digital-computer-AGI pursuit is a wrong
direction and a waste
of time, that's fine.  But why do you feel the need to keep repeatedly
informing us
of this fact?

For instance, I think string theory is probably wrong.  But I don't
see any point in
spending my time trolling on string theory email lists and harping on this point
repeatedly and confusingly.  Let them explore their hypothesis...

-- Ben G



Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-07 Thread Benjamin Goertzel
On Jan 7, 2008 12:08 PM, David Butler [EMAIL PROTECTED] wrote:
 Would two AGI's with the same initial learning program, same hardware in a
 controlled environment (same access to a specific learning base-something
 like an encyclopedia) learn at different rates and excel in different tasks?

Yes ...

Even in the extreme case of identical external stimuli, two AGI systems could
evolve slightly differently due to consequences of rounding error.

However, if the AGI systems were built carefully enough (so as not to
be susceptible
to rounding error or other related phenomena) it could be made so that
with totally
identical environments they were totally identical in behavior, so
long as no hardware
failures occurred.

(I note though that minor hardware failures like small defects in RAM
or disk could
always intervene and play the same role as roundoff error, potentially
setting the
two AGIs with identical code and identical environmental stimuli on different
courses.)
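
As a toy illustration of how a last-bit rounding difference can snowball (a
generic numerical example, nothing specific to any AGI architecture):

# Floating-point addition is not associative, so merely summing the same
# inputs in a different order can leave two "identical" systems differing
# in the last bit of a result...
values = [0.1, 0.2, 0.3]
x_a = sum(values)            # 0.6000000000000001
x_b = sum(reversed(values))  # 0.6

# ...and any sufficiently sensitive iterative process (here the chaotic
# logistic map, standing in for strongly nonlinear internal dynamics)
# amplifies that difference into visibly different trajectories.
def iterate(x, steps=80):
    for _ in range(steps):
        x = 3.9 * x * (1.0 - x)
    return x

print(x_a == x_b)                  # False
print(iterate(x_a), iterate(x_b))  # far apart after enough steps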

-- Ben



Re: [agi] Re: AGI-08 - Call for Participation

2008-01-07 Thread Benjamin Goertzel
Nothing of that nature is planned at present ... as we the conference organizers
are rather busy with other stuff, we've been pretty much fully whelmed with the
organization of the First Life conference...

It might be fun to do an in-world AGI meet-up a couple weeks after AGI-08, with
an aim of discussing the AGI-08 papers... not sure how many paper authors
would show up, though...

-- Ben

On Jan 7, 2008 2:07 PM, Bob Mottram [EMAIL PROTECTED] wrote:
 Will there be any AGI08 activities in Second Life?




 On 07/01/2008, Bruce Klein [EMAIL PROTECTED] wrote:
  quick agi-08 update...
 
  all 49 papers are now online and reg is open:
   www.agi-08.org/papers
   www.agi-08.org/register
 
   promo video:
   www.agi-08.org/video
 
 
  On Dec 11, 2007 1:51 PM, Bruce Klein [EMAIL PROTECTED] wrote:
   The First Conference on Artificial General Intelligence (AGI-08)
   March 1-3, 2008 at Memphis, Tennessee, USA
   Early Registration Deadline: January 31, 2008
   Conference Website: http://www.agi-08.org
  
   Artificial General Intelligence (AGI) research focuses on the original
   and ultimate goal of AI --- to create intelligence as a whole. AGI seeks
   to create software or hardware systems that are generally intelligent
   in roughly the same sense that humans are, rather than being specialized
   problem-solvers such as most of the systems currently studied in the AI
   field.
  
   Current research in the AGI field is vigorous and diverse, exploring a
   wide range of possible paths, including theoretical and experimental
   computer science, cognitive science, neuroscience, and innovative
   interdisciplinary methodologies.
  
   AGI-08 is the very first international conference in this emerging field
   of science and engineering. The conference is organized with the
   cooperation of AAAI, and welcomes researchers and students in all
   relevant disciplines.
  
   Different from conventional conferences, AGI-08 is planned to be
   intensively discussion oriented. All the research papers accepted for
   publication in the Proceedings (49 papers total) will be available in
   advance online, so that attendees may arrive prepared to discuss the
   relevant issues with the authors and each other. The sessions of the
   conference will be organized to facilitate open and informed
   intellectual exchange on themes of common interest.
  
   Besides the technical discussions, time will also be scheduled at AGI-08
   for an exploratory discussion of possible ways to work toward the
   formation of a more cohesive AGI research community -- including future
   conferences, publications, organizations, etc.
  
   After the two-and-half day conference, there will be a half day workshop
   on the broader implications of AGI technology, including ethical,
   sociological and futurological considerations.
  
   Yours,
  
   Organizing Committee, AGI-08
   http://www.agi-08.org/organizing.php
  
 
   


Re: [agi] Re: AGI-08 - Call for Participation

2008-01-07 Thread Benjamin Goertzel
I'll forward this request to those who will be handling such things...

thx
ben

On Jan 7, 2008 3:35 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 Ben,

 I'm certainly not in position to ask for it, but if it's possible, can
 some kind of microphones be used during presentations on agi-08 (if
 someone is going to film it)? Audio was very poor in videos from
 previous events.

 --
 Vladimir Nesov    mailto:[EMAIL PROTECTED]



Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-06 Thread Benjamin Goertzel
On Jan 5, 2008 10:52 PM, Mike Tintner [EMAIL PROTECTED] wrote:
 I think I've found a simple test of cog. sci.

 I take the basic premise of cog. sci. to be that the human mind - and
 therefore its every activity, or sequence of action - is programmed.

No.  This is one perspective taken by some cognitive scientists.  It does
not characterize the field.

 (This has huge implications for AGI - you guys believe that an AGI must be
 programmed for its activities, I contend that free composition instead is
 essential for truly adaptive, general intelligence and is the basis of all
 animal and human activities).

Spontaneous, creative self-organized activity is a key aspect of Novamente
and many other AGI designs.

 So how to test cog sci? I contend that the proper, *ideal* test is to record
 humans' actual streams of thought about any problem - like, say, writing an
 essay - and even just a minute's worth will show that, actually, humans have
 major difficulties following anything like a joined-up, rational train of
 thought - or any stream that looks remotely like it could be programmed
 overall.

A)
While introspection is certainly a valid and important tool for inspiring
work in AI and cog sci, it is not a test of anything.  There is much empirical
evidence showing that humans' introspections of their own cognitive
processes are highly partial and inaccurate.

For instance, if we were following the arithmetic algorithms that we think
we are, there is no way the timing of our responses when solving arithmetic
problems would come out the way they actually do.  (I don't have the references
for this work at hand, but I saw it years ago in the Journal of Math Psych I
believe.)

B)
Whether or not something looks like it's following a simple set of rules
doesn't mean much.  Chaotic underlying dynamics can give rise to
high-level orderly behavior; and simple systems of rules can give rise
to apparently disorderly, incomprehensibly complex behaviors.  Cf
the whole field of complex-systems dynamics.
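
A standard toy illustration of the second point (a generic example, not tied
to any particular cognitive model): elementary cellular automaton rule 30 is
about as simple as a rule system gets, yet the pattern it generates looks
disorderly enough to serve as a pseudorandom source.

# Each cell's next state depends only on itself and its two neighbors,
# via rule 30's fixed 8-entry truth table.
RULE = 30
width, steps = 64, 24
cells = [0] * width
cells[width // 2] = 1   # start from a single "on" cell

for _ in range(steps):
    print("".join("#" if c else "." for c in cells))
    cells = [
        (RULE >> (cells[(i - 1) % width] * 4
                  + cells[i] * 2
                  + cells[(i + 1) % width])) & 1
        for i in range(width)
    ]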


-- Ben G



Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-06 Thread Benjamin Goertzel
I don't really understand what you mean by "programmed" ... nor by "creative".

You say that, according to your definitions, a GA is programmed and
ergo cannot be creative...

How about, for instance, a computer simulation of a human brain?  That
would be operated via program code, hence it would be programmed --
so would you consider it intrinsically noncreative?

Could you please define your terms more clearly?

thx
ben

On Jan 6, 2008 1:21 PM, Mike Tintner [EMAIL PROTECTED] wrote:

 MT: This has huge implications for AGI - you guys believe that an AGI must
 be
  programmed for its activities, I contend that free composition instead is
  essential for truly adaptive, general intelligence and is the basis of
  all
  animal and human activities).
 
 Ben:  Spontaneous, creative self-organized activity is a key aspect of
 Novamente
  and many other AGI designs.

 Ben,

 You are saying that your pet presumably works at times in a non-programmed
 way - spontaneously and creatively? Can you explain briefly the
 computational principle(s) behind this, and give an example of where it's
 applied, (exploration of an environment, say)? This strikes me as an
 extremely significant, even revolutionary claim to make, and it would be a
 pity if, as with your analogy claim, you simply throw it out again without
 any explanation.

 And I'm wondering whether you are perhaps confused about this, (or I have
 confused you) -  in the way you definitely are below. Genetic algorithms,
 for example, and suchlike classify as programmed and neither truly
 spontaneous nor creative.

 Note that Baum asked me a while back what  test I could provide that humans
 engage in free thinking.  He, quite rightly, thought it a scientifically
 significant claim to make, that demanded scientific substantiation.

 My test is not a test, I stress though, of  free will. But have you changed
 your mind about this? It's hard though not a complete contradiction  to
 believe in a mind being spontaneously creative and yet not having freedom of
 decision.

 MT:  I contend that the proper, *ideal* test is to record
  humans' actual streams of thought about any problem
 
 Ben:  While introspection is certainly a valid and important tool for
 inspiring
  work in AI and cog sci, it is not a test of anything.  

 Ben,

 This is a really major - and very widespread - confusion.  A recording of
 streams of thought is what it says - a direct or recreated recording of a
 person's actual thoughts. So, if I remember right, some form of that NASA
 recording of subvocalisation when someone is immediately thinking about a
 problem, would classify as a record of their thoughts.

 Introspection is very different - it is a report of thoughts, remembered at
 a later, often much later time.

 A record(ing) might be me saying I want to kill you, you bastard  in an
 internal daydream. Introspection might be me reporting later: I got very
 angry with him in my mind/ daydream. Huge difference. An awful lot of
 scientists think, quite mistakenly, that the latter is the best science can
 possibly hope to do.

 Verbal protocols - getting people to think aloud about problems - are a sort
 of halfway house (or better).







Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-06 Thread Benjamin Goertzel
On Jan 6, 2008 4:00 PM, Mike Tintner [EMAIL PROTECTED] wrote:
 Ben,

 Sounds like you may have missed the whole point of the test - though I mean
 no negative comment by that - it's all a question of communication.

 A *program* is a prior series or set of instructions that shapes and
 determines an agent's sequence of actions. A precise itinerary for a
 journey. Even if the programmer doesn't have a full but only a very partial
 vision of that eventual sequence or itinerary.  (The agent of course can be
 either the human mind or a computer).

OK, then any AI that is implemented in computer software is by your
definition a programmed AI.  Whether it is based on GA's, neural nets,
logical theorem-proving or whatever.

So, is your argument that digital computer programs can never be creative,
since you have asserted that programmed AI's can never be creative?

-- Ben G



Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-06 Thread Benjamin Goertzel
Mike,

 The short answer is that I don't believe that computer *programs* can be
 creative in the hard sense, because they presuppose a line of enquiry, a
 predetermined approach to a problem -
...
 But I see no reason why computers couldn't be briefed rather than
 programmed, and freely associate across domains rather than working along
 predetermined lines.

But the computer that is being "briefed" is still running some software program,
hence is still "programmed" -- and its responses are still determined by
that program (in conjunction w/ the environment, which however it perceives
only thru a digital bit stream).

 I don't however believe that purely *digital* computers are capable of all
 the literally imaginative powers (as already discussed elsewhere) that are
 also necessary for true creativity and general intelligence.

I don't know how you define a literally imaginative power.

So, it seems like you are saying

-- digital computer software can never truly be creative or possess general
intelligence

Is this your assertion?

It is not an original one of course: Penrose, Dreyfus and many others have
argued the same point.   The latter paragraph of yours I've quoted could
be straight out of The Emperor's New Mind by Penrose.

Penrose then notes that quantum computers can compute only the same
stuff that digital computers can; so he posits that general intelligence is
possible only for quantum gravity computers, which is what he posits
the brain is.

I think Penrose is most probably wrong, but at least I understand what
he is saying...

I'm just trying to understand what your perspective actually is...

thx
Ben



Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-06 Thread Benjamin Goertzel
If you believe in principle that no digital computer program can ever
be creative, then there's no point in me or anyone else rambling on at
length about their own particular approach to digital-computer-program
creativity...

One question I have is whether you would be convinced that digital
programs ARE capable of true creativity, by any possible actual achievements
of digital computer programs...

If a digital computer program made a great painting, wrote a great novel,
proved a great theorem, patented dozens of innovative inventions, etc. --
would you be willing to admit it's creative, or would you argue that due to
its digital nature, it must have achieved these things in a noncreative
way?

Ben

On Jan 6, 2008 6:58 PM, Mike Tintner [EMAIL PROTECTED] wrote:
 Well we (Penrose  co) are all headed in roughly the same direction, but
 we're taking different routes.

 If you really want the discussion to continue, I think you have to put out
 something of your own approach here to spontaneous creativity (your terms)
 as requested.

 Yes, I still see the mind as following instructions a la briefing, but
 only odd ones, not a whole rigid set of them., a la programs. And the
 instructions are open-ended and non-deterministically open to
 interpretation, just as my briefing/instruction to you - Ben go and get me
 something nice for supper - is. Oh, and the instructions that drive us,
 i.e. emotions, are always conflicting, e.g [Ben:] I might like to.. but do
 I really want to get that bastard anything for supper? Or have the time to,
 when I am on the very verge of creating my stupendous AGI?

 Listen, I can go on and on - the big initial deal is the claim that the mind
 isn't -  no successful AGI can be - driven by a program, or thoroughgoing
 SERIES/SET of instructions - if it is to solve even minimal general
 adaptive, let alone hard creative problems. No structured approach will work
 for an ill-structured problem.

 You must give some indication of how you think a program CAN be generally
 adaptive/ creative - or, I would argue, squares (programs are so square,
 man) can be circled :).


  Mike,
 
  The short answer is that I don't believe that computer *programs* can be
  creative in the hard sense, because they presuppose a line of enquiry, a
  predetermined approach to a problem -
  ...
  But I see no reason why computers couldn't be briefed rather than
  programmed, and freely associate across domains rather than working along
  predetermined lines.
 
  But the computer that is being briefed is still running some software
  program,
  hence is still programmed -- and its responses are still determined by
  that program (in conjunction w/ the environment, which however it
  perceives
  only thru a digital bit stream)
 
  I don't however believe that purely *digital* computers are capable of
  all
  the literally imaginative powers (as already discussed elsewhere) that
  are
  also necessary for true creativity and general intelligence.
 
  I don't know how you define a literally imaginative power.
 
  So, it seems like you are saying
 
  -- digital computer software can never truly be creative or possess
  general
  intelligence
 
  Is this your assertion?
 
  It is not an original one of course: Penrose, Dreyfus and many others have
  argued the same point.   The latter paragraph of yours I've quoted could
  be straight out of The Emeperor's New Mind by Penrose.
 
  Penrose then notes that quantum computers can compute only the same
  stuff that digital computers can; so he posits that general intelligence
  is
  possible only for quantum gravity computers, which is what he posits
  the brain is.
 
  I think Penrose is most probably wrong, but at least I understand what
  he is saying...
 
  I'm just trying to understand what your perspective actually is...
 
 




Re: [agi] NL interface

2007-12-30 Thread Benjamin Goertzel
Matt,

I agree w/ your question...

I actually think KB's can be useful in principle, but I think they
need to be developed
in a pragmatic way, i.e. where each item of knowledge added can be validated via
how useful it is for helping a functional intelligent agent to achieve
some interesting
goals...

ben g


 What would you do with the knowledge base after you build it?  I know this
 sounds like a dumb question, but Cyc has built a huge base of common sense
 knowledge in a structured format, but it isn't useful for anything.  Of course
 that is not the result they anticipated.  How will you avoid the same type of
 (very expensive) failure?  What type of knowledge will it contain?


 -- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] OpenCog

2007-12-28 Thread Benjamin Goertzel
On Dec 28, 2007 5:59 AM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:

 OpenCog is definitely a positive thing to happen in the AGI scene.  It's
 been all vaporware so far.

Yes, it's all vaporware so far ;-)

On the other hand, the code we hope to release as part of OpenCog actually
exists, but it's not yet ready for opening up, as some of it needs to
be extracted from
the overall Novamente code base, and other parts of it need to be cleaned up
in various ways...

Much of the reason for yakking about it months in advance of releasing it was a
desire to assess the level of enthusiasm for it.  There are a number
of enthusiastic
potential OpenCog developers on the OpenCog mail list, so in that regard, I feel
the response has been enough to merit proceeding with the project...


 I wonder what would be the level of participation?

Time will tell!

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=79870666-e314ea


Re: [agi] OpenCog

2007-12-28 Thread Benjamin Goertzel
On Dec 28, 2007 8:28 AM, Richard Loosemore [EMAIL PROTECTED] wrote:
 Benjamin Goertzel wrote:
  I wish you much luck with your own approach   And, I would imagine
  that if you create a software framework supporting your own approach
  in a convenient way, my own currently favored AI approaches will not
  be conveniently explorable within it.  That's the nature of 
  framework-building.

 Actually, that would be a serious misunderstanding of the framework and
 development environment that I am building.  Your system would be just
 as easy to build as any other.

 My purpose is to create a description language that allows us to talk
 about different types of AGI system, and then construct design
 variations automatically.

I don't believe it is possible to create a framework that both

a) is unbiased regarding design type

b) makes it easy to construct AGI designs

Just as different programming languages are biased toward different types
of apps, so with different AGI frameworks...

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=79885135-d592af


Re: [agi] OpenCog

2007-12-27 Thread Benjamin Goertzel
Loosemore wrote:
 I am sorry, but I have reservations about the OpenCog project.

 The problem of building an open-source AI needs a framework-level tool
 that is specifically designed to allow a wide variety of architectures
 to be described and expressed.

 OpenCog, as far as I can see, does not do this, but instead takes a
 particular assortment of mechanisms as its core, then suggests that
 people add modules onto this core.  This is not a framework-level
 approach, but a particular-system approach that locks all future work
 into the limitations of the initial core.

 For example, I have many, many AGI designs that I need to explore, but
 as far as I can see, none of them can be implemented at all within the
 OpenCog system.  I would have to rewrite OpenCog completely to get it to
 meet my needs.

Hi Richard,

To be sure, OpenCog is not intended to be equally useful for all possible
AGI approaches.

To provide something equally useful for all AGI approaches, one would
need to make something extremely broad -- basically, one would need to
make a highly general-purpose
 operating-system and/or programming-language, rather than
a specific software framework.

OpenCog is designed to support a certain family of AGI designs, but
is not designed to conveniently support all possible AGI designs.

Definitely, there is room in the world for more than one AGI framework.

As an example the CCortex platform seems like it may be a good
framework within which to build biologically realistic NN based AGI
systems (note, this is based on their literature only, I've never tried
their system).

I wish you much luck with your own approach   And, I would imagine
that if you create a software framework supporting your own approach
in a convenient way, my own currently favored AI approaches will not
be conveniently explorable within it.  That's the nature of framework-building.

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=79828215-b4b8b5


[agi] OpenCog

2007-12-27 Thread Benjamin Goertzel
Re the recent discussion of OpenCog -- this recent post I made
to the OpenCog mailing list may perhaps help clarify the
intentions underlying the project further.

-- Ben


-- Forwarded message --
From: Benjamin Goertzel [EMAIL PROTECTED]
Date: Dec 27, 2007 11:07 AM
Subject: Re: Project Questions
To: [EMAIL PROTECTED]


On Dec 27, 2007 9:27 AM, Pat [EMAIL PROTECTED] wrote:

 Ben,

 Thank you for the link to the paper. My thinking has been that in
 order to develop a machine capable of general intelligence, a
 specification would need to be developed which outlines the
 functionality of a thinking machine, separate from any implementation
 issues.

I totally agree -- and,
I have tried to do that in a 350-page manuscript, which I plan to
release online sometime in the first half of 2008.

HOWEVER, even though I am a big fan of my AI design, it's obvious
that  there are going to be many lessons learned during the course of
working out more detailed designs of subcomponents and experimenting
with implementations.

This is a useful conversation, because I'm seeing that in talking about
OpenCog it will be valuable to distinguish

-- OpenCog core
-- Specific AGI designs that can be built on the OpenCog core,
generally in a modular fashion (each AGI system comprising a certain
set of MindAgents and a certain set of functional units)

The OpenCog core may be used for a load of different AGI designs

My own AGI design is one particular design that can be built on the
OpenCog core.
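
To make the core-vs-design split concrete, here is a very rough Python sketch.
The names below echo the terms used above (MindAgents, a shared store of
atoms), but the interfaces are invented purely for illustration; they are not
the actual OpenCog API, which hasn't been released yet:

class KnowledgeStore:
    """Shared store of atoms provided by the core (hypothetical interface)."""
    def __init__(self):
        self.atoms = []

class MindAgent:
    """A cognitive process contributed by a specific AGI design."""
    def run(self, store):
        raise NotImplementedError

class Scheduler:
    """Core loop: gives each registered agent a slice of activity in turn."""
    def __init__(self, store):
        self.store = store
        self.agents = []

    def register(self, agent):
        self.agents.append(agent)

    def step(self):
        for agent in self.agents:
            agent.run(self.store)

A particular AGI design, in this picture, is just a particular set of
MindAgents (grouped into functional units) registered with the core.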

The AGI design that I will advocate for building on top of the OpenCog
core is a variant of the Novamente design; I'm not sure what to call it,
but it will get a name before the OpenCog launch...

However, I also want to explicitly encourage the creation of other AGI
designs on top of OpenCog.  Hopefully there can be crosspollination
of different approaches.

 I can see where your approach of taking diverse contributions of
 software and integrating them around a framework could be instrumental
 in the exploration and discovery of the specification (among other
 benefits).

As I hope I've clarified above,

-- I do have a fairly precise specification I'm interested in using OpenCog
to explore, which is closely related (but not identical) to my Novamente
specification

-- However, I don't intend OpenCog to be restricted to the implementation and
exploration of this specification of mine

Thanks for your questions, they are certainly good ones...

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=79828670-a37b2c


Re: [agi] BMI/BCI Growing Fast

2007-12-26 Thread Benjamin Goertzel
 I think that at first sight this goes to support my position in the original
 argument with Ben- namely that there are all kinds of ways to get at or read
 minds, and there is now an increasing momentum to do that.

Being able to read the stream of subvocalizations coming out from a person's
mind is a very very very long way from being able to read the internal
cognitive dynamics of a person's thoughts.  And the technology used for the
former does not seem capable of being incrementally extended to do the latter.

I agree that mind-reading will happen, and that the pace of growth of
mind-reading
related technologies is exponential.  But still you seem to be a bit
overoptimistic
about the value of the exponent.  (Although, I don't discount the possibility of
some wild outlier innovation coming along...)

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=79286730-5a7369


[agi] AGI, NLP, embodiment and gesture

2007-12-26 Thread Benjamin Goertzel
Hi all,

Here you'll find a paper

http://goertzel.org/new_research/WCCI_AGI.pdf

that I've submitted to the WCCI 2008 Special Session
on Human-level AI.

It tries to summarize the big picture about how advanced
AI can be achieved via synthesizing NLP and virtual embodiment...

The paper refers to another paper

An Integrative Methodology for Teaching Embodied Non-Linguistic Agents

which is linked to from

http://www.agi-08.org/schedule.php

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=79438803-5dc039


[agi] Mizar translated to TPTP !

2007-12-22 Thread Benjamin Goertzel
For those interested in automated theorem-proving,
I'm pleased to announce a major advance in tools
has occurred...

The Mizar library of formalized math has finally
been translated into a sensible format, usable
for training automated theorem-proving systems ;-)

Josef Urban informed me that

*

I am now writing an extended version of a paper describing export of
Mizar proofs to TPTP (see
http://www.springerlink.com/content/t88848500815t188/ for the conference
version), and have just remembered your request for having Mizar proofs
exported to KIF or TPTP. So yes, I have translated it all to TPTP
derivations, which can be ATP-verified by tools like GDV (see the paper).
The page which presents it all is http://www.cs.miami.edu/~tptp/MizarTPTP/
, the TSTP icons there give you the TPTP derivations corresponding to each
Mizar proof (see
http://www.activemath.org/workshops/MathUI/07/proceedings/Urban-etal-MizarIDV-MathUI07.pdf
for explanation of the site). If you want tar.gz of all the proofs, some
version is at http://lipa.ms.mff.cuni.cz/~urban/nd_problems1.tar.gz (watch
 out, it is huge). There are still some completeness bugs, the biggest is
 lack of explicit arithmetic evaluations done by mizar (inferences like
 4+5=9 that a dumb FOL ATP system will not prove), but otherwise it mostly
works (see the paper).

*

TPTP, unlike Mizar syntax, has a small and straightforward BNF grammar, so it
should be translatable into the internal KR of any AI system with a
formal-logic-based aspect with only moderate effort.
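
As a tiny illustration of how simple the format is, here is a sketch (in
Python) of reading one TPTP first-order formula line into a crude internal
form.  The axiom itself is invented for the example; only the
fof(name, role, formula). surface syntax is the actual TPTP convention:

import re

# An invented example axiom in TPTP first-order form (fof) syntax.
tptp_line = "fof(add_comm, axiom, ![X,Y]: (add(X,Y) = add(Y,X)))."

m = re.match(r"fof\((\w+),\s*(\w+),\s*(.*)\)\.\s*$", tptp_line)
name, role, formula = m.groups()

# A real translator would parse 'formula' into the system's own logical terms;
# here we just keep the raw pieces.
internal = {"name": name, "role": role, "formula": formula}
print(internal)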

So we can now import all of undergrad math and some grad-level math into
AI systems, which provides the possibility of doing inductive
reasoning on a large
corpus of proofs to teach AI systems how to prove theorems...

-- Ben G


On Mon, 19 Mar 2007, Josef Urban wrote:



 Ben,

 the MPTP export keeps at this moment quite a lot of the original Mizar proof
 structure in the TPTP 'useful info' slot, but not all of it. Depending on
 what kind of learning you want to do, it might or might not suffice. The
 biggest omission is probably the lack of further description of unproved
 propositions (precisely in TPTP syntax: fofs with role 'unknown' and
 mptp_info(_,_,proposition,_,_)). Most of them are probably natural deduction
 (ND) assumptions, but some are not (e.g. propositions about constants
 introduced by the 'consider' keyword - they can be rather thought of as
 definitions).

 It should not be difficult to add the missing info there - the TPTP format is
 produced directly from the XML, which contains all the proof structure. There
 have been several reasons for postponing this so far - my focus on getting
 the reproving of the simple steps right (which is sort of a precondition for
 ATP cross-verification of Mizar, which in turn is a precondition for
 productive use of deductive tools like ATP as part of larger knowledge-based
 systems tailored for Mizar), and also a focus on getting the reproving of
 theorems from their external references right, which in a sense gives you a
 large-scale proof structure (which is not only usable for learning, but also
 - unlike the ND internal proofs - understandable to ATPs, and thus allowing
 things like the MPTP challenge). Another reason was that I did not want to
 decide about the details of MPTP ND annotations, until I decided about the
 export of ND proof structure to TPTP. The latter accidentally happened last
 week (not only as a next step for the cross-verification, but also as a
 megalomanic plan to build the detailed MML DAG with millions of nodes :-). So
 some missing annotations will appear in the next few weeks (maybe even days),
 and more importantly, the Mizar ND proofs will become TPTP proofs (if needed
 without any ND - though there is a good chance that assumptions will become
 acknowledged and processed parts of TPTP proofs).

 To sum up:
 - you can have 'raw Mizar' loaded by using the XML - that gives you access to
 the formulas and Mizar proof structure; the disadvantage is that for any
 deductive tool which you might want to use, you'll have to define the
 translation to its logic
 - there is the 'raw MPTP', with formulas in extended TPTP syntax containing
 the mptp_info annotations; this is sort of a middle way between the XML and
 standard TPTP; the annotations are likely to get a bit better in near future,
 what they annotate is still the Mizar ND structure
 - there are pre-generated 'standard TPTP' problem sets in the MPTP distro and
 the MPTPChallenge distro; these you can feed to ATP systems, and also
 learning systems (the symbols and proposition names are stable there - always
 the same semantics); for quite a lot of them an ATP proof can be found (and
 used for learning)
 - there will (hopefully soon) be a full TPTP (i.e. mostly/completely non-ND)
 proof structure export, compatible with the proofs produced by ATP systems
 like EP.

 Josef

 On Sun, 18 Mar 2007, Ben Goertzel wrote:


 Josef,

 Thanks for your reply!  However, I'm not 

Re: [agi] BMI/BCI Growing Fast

2007-12-15 Thread Benjamin Goertzel
I would add that the Chinese universities are extremely eager to
recruit Western professors to lead research labs in AI and other
areas.

Hugo DeGaris relocated there a year or so ago, and is quite relieved
to be supplied with a bunch of excellent research assistants and loads
of computational firepower for his work on neural nets and FPGA's ...
he'd had a rough ride for a while, what with the bankruptcy of his
Belgian employer Starlab, and a 6-year stint at Utah State University
where he was unable to get significant US gov't research funding...

When I talked to various university administrators in Wuhan (where
Hugo is), it was quite clear that if I wanted to relocate there, I would have
access to an essentially unlimited number of research programmers to
help with my AI projects.  Without needing to constantly write grant
applications and compete for funds.

Novamente LLC is in an exciting phase right now though; and my
personal situation would make it difficult for me to relocate to China ... but
it's interesting to know that backup plan is there...

-- Ben G

On Dec 15, 2007 5:47 AM, Gary Miller [EMAIL PROTECTED] wrote:

 Ben said

  That is sarcasm ... however, it's also my serious hypothesis as to why
  the Chinese gov't doesn't mind losing their best & brightest... 

 It may also be that China understands too that as more Chinese become
 Americans, China will have a greater exposure and political lobby within the
 United States.

 Look at how much political influence Israel now exerts within the United
 States government and corporations.

 Also as with other minorities, the more exposure that Americans have to them
 in their everyday life the less fear and distrust that will be experienced.

 As the Chinese people which I know have entered into higher end professional
 roles in the United States they are eager to form business alliances with
 company's back home in China.

 China is also still feeling great population pressure.

 I just returned from meeting my fiancé there and in the cities where I
 stayed it still felt very overpopulated by my standards.

 Even though they possess excellent mass transit, people are packed in buses
 like sardines and more people move from the countryside to the city everyday
 to find work.

 I was only there for ten days so I did not gain a lot of understanding of
 how they manage to keep everything running.

 But in just that short time I saw that they have the same drug,
 homelessness, and poverty problems that we have here.

 The vast majority of people I met there were very friendly towards Americans
 and even though I know there have to be a lot of us there, because I was not
 in the tourist areas, I could go two or three days without seeing another
 American.

 -
 This list is sponsored by AGIRI: http://www.agiri.org/email
 To unsubscribe or change your options, please go to:
 http://v2.listbox.com/member/?;

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=76427523-f0fb03

Re: [agi] BMI/BCI Growing Fast

2007-12-14 Thread Benjamin Goertzel
Mike,

My comment is that this is GREAT research and development, but, for
the near and probably medium future is very likely to be about perception
and action rather than cognition.

I.e., we are sort of on the verge of understanding how to hook up new
sensors to the brain, and hook the brain up to new actuators.  We're not
there yet except in some pretty simple cases, but we're getting there.
And that's exciting!

But, we're still quite clueless about how to, say, hook the brain up to
a calculator or to Google in a useful way... due to having a vastly
insufficiently detailed knowledge of how the brain carries out
cognitive operations...

-- Ben G

On Dec 14, 2007 5:22 AM, Mike Tintner [EMAIL PROTECTED] wrote:
 [Comments?]

 Brain-computer link systems on the brink of breakthrough, study finds

 Systems that directly connect silicon circuits with brains are under
 intensive development all over the world, and are nearing commercial
 application in many areas, according to a study just placed online.
 Neurobiologist Theodore W. Berger of the University of Southern
 California chaired the eight-member committee which compiled the
 International Assessment of Research and Development in Brain-Computer
 Interfaces, published in October by the World Technology Evaluation
 Center, Inc., of Baltimore MD

 Berger, who holds the David Packard Chair at the USC Viterbi School of
 Engineering and is Director of the USC Center for Neural Engineering
 contributed the introduction and two chapters of the report, which
 encompassed dozens of research institutes in Europe and Asia.

 The other committee members (and chapter authors) included John K.
 Chapin (SUNY Downstate Medical Center); Greg A. Gerhardt (University of
 Kentucky); Dennis J. McFarland (Wadsworth Center); José C. Principe
 (University of Florida); Dawn M. Taylor (Case Western Reserve); and
 Patrick A. Tresco (University of Utah).

 The report contains three overall findings on Brain-Computer Interface
 (BCI) work worldwide:

 -- BCI research is extensive and rapidly growing, as is growth in the
 interfaces between multiple key scientific areas, including biomedical
 engineering, neuroscience, computer science, electrical and computer
 engineering, materials science and nanotechnology, and neurology and
 neurosurgery.

 -- BCI research is rapidly approaching first-generation medical
 practice-clinical trials of invasive BCI technologies and
 significant home use of noninvasive, electroencephalography (EEG-based)
 BCIs. The panel predicts that BCIs soon will markedly influence the
 medical device industry, and additionally BCI research will rapidly
 accelerate in non-medical arenas of commerce as well, particularly in
 the gaming, automotive, and robotics industries.

 -- The focus of BCI research throughout the world was decidedly uneven,
 with invasive BCIs almost exclusively centered in North America,
 noninvasive BCI systems evolving primarily from European and Asian
 efforts. BCI research in Asia, and particularly China, is accelerating,
 with advanced algorithm development for EEG-based systems currently a
 hallmark of China's BCI program. Future BCI research in China is clearly
 developing toward invasive BCI systems, so BCI researchers in the US
 will soon have a strong competitor.

 The chapters of the report offer detailed discussion of specific work
 from around the world, work on Sensor Technology, Biotic-Abiotic
 Interfaces, BMI/BCI Modeling and Signal Processing, Hardware
 Implementation, Functional Electrical Stimulation and Rehabilitation
 Applications of BCIs, Noninvasive Communication Systems, Cognitive and
 Emotional Neuroprostheses, and BCI issues arising out of research
 organization-funding, translation-commercialization, and education and
 training.

 With respect to translation and commercialization, the Committee found
 that BCI research in Europe and Japan was much more tightly tied to
 industry compared to what is seen in the U.S., with multiple high-level
 mechanisms for jointly funding academic and industrial partnerships
 dedicated to BCIs, and mechanisms for translational research that
 increased the probability of academic prototypes reaching industrial
 paths for commercialization.

 The report is now downloadable online at the WTEC website, at
 http://www.wtec.org/bci/BCI-finalreport-10Oct2007-lowres.pdf
 http://www.wtec.org/bci/BCI-finalreport-10Oct2007-lowres.pdf

 Source: University of Southern California
 http://www.physorg.com/news116764966.html
 http://www.physorg.com/news116764966.html


 -
 This list is sponsored by AGIRI: http://www.agiri.org/email
 To unsubscribe or change your options, please go to:
 http://v2.listbox.com/member/?;


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=76030919-edd895

Re: [agi] BMI/BCI Growing Fast

2007-12-14 Thread Benjamin Goertzel
Mike wrote:
 Personally, my guess is that serious mindreading machines will be a reality
 in the not too distant future - before AGI and seriously autonomous mobile
 robots.


No way.

Tell that to the neuroscientists in your local university neuro lab, and they'll
get a good laugh ;-)

The future course of AGI is contentious and uncertain (though I have my own
strong opinions), but it seems very clear that mobile robotics tech is
more advanced
than mind-reading-type brain-imaging, and apt to progress far faster...

We have no f***-ing idea how to read thoughts from the brain right now
-- whereas
we do have a pretty well worked out theory of probabilistic robotics,
and are in a phase
of optimizing and tuning and generalizing and getting the details right...

What are the empirical grounds for your optimism?

Just to be clear: I am sure that mindreading technology is coming,
it's your relative
timing estimate that perplexes me...

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=76052095-6b5f73


Re: [agi] BMI/BCI Growing Fast

2007-12-14 Thread Benjamin Goertzel
Hi,

From Bob Mottram on the AGI list:
 However, I'm not expecting to see the widespread cyborgisation of
 human society any time soon.  As the article suggests the first
 generation implants are all devices to fulfill some well defined
 medical need, and will have to go through all the usual lengthy
 testing procedures before they're generally accepted.  Only after the
 initial medical phase which could last several decades will brain
 implants be sufficiently inexpensive and be considered sufficiently
 safe that people start to think about using these things as a
 lifestyle, work or leisure enhancement rather as cosmetic surgery is
 today.

Hmmm... it's interesting to speculate, though...

If it were possible to wire a calculator into the brain, this could dramatically
increase the effectiveness of certain kinds of work, right?

So, if a certain nation were to make laws allowing this, and to encourage
research into this, then potentially they could gain a dramatic advantage
over other nations...

There does therefore seem a possibility for a brain enhancement race
if a case is made to some national government that within say 10-20 years
of effort a massively productivity-increasing brain-enhancement could be made.

This is not really an AGI topic, though, so I'm cross-posting to the Singularity
list and I think discussion should continue there if anyone feels like it.
(Though the topic may be sufficiently obvious not to need follow-up
discussion...)

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=76046209-29ce80


Re: [agi] BMI/BCI Growing Fast

2007-12-14 Thread Benjamin Goertzel
 Bear in mind that science has used very little imagination here to date.
 Science only started studying consciousness ten years ago. It still hasn't
 started studying Thought - the actual contents of consciousness: the
 streams of thought inside people's heads. In both cases, the reason has been
 sheer prejudice and nothing to do with true science.

That is absurd -- the reason is that brain-scanning doesn't work very well;
and designing informative rigorous psych lab experiments to measure aspects
of cognition is really hard


 I'm confident that well within the next 10 years, science will a) recognize
 Thought as a vital area of study (with the same mushrooming of study that
 took place with Consciousness, if not larger) and will b) understand why
 Thought is so important - above all,  to improve human thinking.

What do you think the mushrooming of Cognitive Science during the last
decade has been?  Science does recognize thought as a critical area
of study.  It's just a difficult thing to study.

All right, I'm going to stop taking the bait of your absurd claims and
statements,
Mike (for a while at least) ...
it's tempting to fall into the trap of correcting silly statements that pour
into my Inbox, but it's burning too much of my time, even though I read and type
quite fast...

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=76106993-9e1bad


Re: [agi] BMI/BCI Growing Fast

2007-12-14 Thread Benjamin Goertzel
Mike:
 Making the general public smarter is not in the best interest of
 government, who wants to keep us fat dumb and (relatively) happy
 (read: distracted).

 If we're not making people smarter with currently available resources,
 why would we invest in research to discover expensive new technologies
 to make people smarter?  We need that money to invest in research for
 expensive new technologies to allow people to be lazier.


You are thinking mostly about the USA, it seems.

I was thinking mostly about the People's Republic of China.

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=76069170-430555


Re: [agi] BMI/BCI Growing Fast

2007-12-14 Thread Benjamin Goertzel

 Is China pushing its people into being smarter?  Are they giving
 incentives beyond the US-style capitalist reasons for being smart?


The incentive is that if you get smart enough, you may figure out a way
to get out of China ;-)

Thus, they let the top .01% out, so as to keep the rest of the top 1%
motivated by the hope of getting out.

Clever, huh?

ben g

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=76321943-06efd5


Re: [agi] BMI/BCI Growing Fast

2007-12-14 Thread Benjamin Goertzel
That is sarcasm ... however, it's also my serious hypothesis as to why
the Chinese gov't doesn't mind losing their best & brightest...

-- Ben

On Dec 14, 2007 7:35 PM, Robin Gane-McCalla [EMAIL PROTECTED] wrote:
 Is that sarcasm or an official Communist Party platform?


 On Dec 14, 2007 3:48 PM, Benjamin Goertzel [EMAIL PROTECTED] wrote:
  
   Is China pushing its people into being smarter?  Are they giving
   incentives beyond the US-style capitalist reasons for being smart?
  
 
  The incentive is that if you get smart enough, you may figure out a way
  to get out of China ;-)
 
  Thus, they let the top .01% out, so as to keep the rest of the top 1%
  motivated by the hope of getting out.
 
  Clever, huh?
 
  ben g
 
  -
  This list is sponsored by AGIRI: http://www.agiri.org/email
  To unsubscribe or change your options, please go to:
  http://v2.listbox.com/member/?;
 



 --
 Robin Gane-McCalla
 YIM: Robin_Ganemccalla
 AIM: Robinganemccalla

 -
 This list is sponsored by AGIRI: http://www.agiri.org/email
 To unsubscribe or change your options, please go to:
 http://v2.listbox.com/member/?;


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=76335876-20d29a


Re: [agi] The Function of Emotions is Torture

2007-12-12 Thread Benjamin Goertzel
Mike

In case you're curious I wrote down my theory of
emotions here

http://www.goertzel.org/dynapsyc/2004/Emotions.htm

(an early version of text that later became a chapter in The
Hidden Pattern)

Among the conclusions my theory of emotions leads to are, as stated there:

*
* AI systems clearly will have emotions
* Their emotions will include, at least, happiness and sadness and
spiritual joy
* Generally AI systems will probably experience less intense
emotions than humans, because they can have more robust virtual
multiverse modeling components, which are not so easily bollixed up –
so they'll less often have the experience of major
non-free-will-related mental-state shifts
* Experiencing less intense emotions does not imply experiencing
less intense states of consciousness.  Emotion is only one particular
species of state-of-consciousness.
* The specific emotions AI systems will experience will probably
be quite different from those of humans, and will quite possibly vary
widely among different AI systems
* If you put an AI in a human-like body with the same sorts of
needs as primordial humans, it would probably develop very similar
emotions to the human ones

*


-- Ben

On Dec 12, 2007 9:27 PM, Mike Tintner [EMAIL PROTECTED] wrote:
 I don't think you've answered my point - which perhaps wasn't put well
 enough.

 All you propose, as far as I can see, is to apply *values* to behaviour - to
 apply positive and negative figures to behaviours considered beneficial or
 detrimental, and thus affect the system's further behaviour - reinforcing
 it, for example.

 It is more or less like a value approach to investing on stocks on the
 stockmarket - when their value goes up or down, a formula determines whether
 the system buys more or less shares.

 But this is a purely numerical approach to altering behaviour. There is
 nothing a priori wrong with it - although, in fact, (although this is a more
 complex argument which I won't really go into), it would never actually work
 for AGI, which has to deal with problems where it is impossible to apply
 precise or reliable values.

 But the important point here is that these *values* are not *emotions* at
 all. They're fundamentally different entities and  affect behaviour in
 fundamentally different ways - your values, for example, will not cause any
 pleasure or pain to a self, or have a corporeal, hormonal nature, or
 conflict. You, like others, are trying to invest your value system with a
 complexity and dignity that it simply hasn't got and has no right to. It's
 absurd - you might just as well talk of every plus or minus sign in a
 mathematical calculation as conferring pleasure or pain.

 It also shows a very limited understanding of emotions.



 Matt: Mike Tintner [EMAIL PROTECTED] wrote:

  Matt: I don't believe that the ability to feel pleasure and pain depends
  on
   consciousness.  That is just a circular definition.
   http://en.wikipedia.org/wiki/Philosophical_zombie
 
  Richard:It is not circular.  Consciousness and pleasure/pain are both
  subjective
  issues.  They can resolved together.
 
  Both of you, in fairly standard fashion, are approaching humans and
  animals
  as if they were dissected on a table with consciousness/ emotions/
  pleasure
   pain lying around.
 
  The reality is that we are integrated systems in which -
 
  a self
 
  is continually subjected to
 
  and feels  (or  to some extent may choose not to feel)
 
  emotions (involving pleasure/pain)
 
  via a (two-way) nervous system.
 
  The questions Matt has to answer is:
 
  1) are the systems you envisage going to have a self (to feel emotions) -
  and if so, why?

 No, I am proposing a measure of reinforcement for intelligence in general,
 whether human, animal, or machine, all of which fall under Legg and Hutter's
 universal intelligence ( http://www.vetta.org/documents/ui_benelearn.pdf ),
 which is based on Hutter's AIXI model (
 http://www.hutter1.net/ai/aixigentle.htm ).  In this model, an agent and an
 environment are modeled by a pair of interactive Turing machines exchanging
 symbols.  In addition, the environment sends a utility or reinforcement
 signal
 to the agent at each step.  The goal of the agent is to maximize the
 accumulated utility.  The paper on universal intelligence (UI) proposes
 defining intelligence as the expected accumulated utility for a randomly
 chosen environment (from a Solomonoff distribution of environments, i.e.
 self
 delimiting Turing machines chosen by coin flips).  Hutter's AIXI model shows
 that the most intelligent strategy is to guess at each step that the
 environment is simulated by the shortest program consistent with the
 observed
 interaction so far.  However, AIXI is not computable.
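 
 [Written out, the universal intelligence measure defined in that paper is,
 up to notation,
 
   \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi
 
 i.e. the agent pi's expected total reward V in each computable environment mu,
 weighted by the Solomonoff prior 2^{-K(mu)}, K being Kolmogorov complexity.]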

 In humans, it is natural to think of positive utility or reinforcement as a
 reward signal or pleasure, and negative utility as a penalty, such as
 pain.
 In this respect, humans seek to maximize expected 

Re: [agi] The same old explosions?

2007-12-11 Thread Benjamin Goertzel
Self-organizing complexity and computational complexity
are quite separate technical uses of the word complexity, though I
do think there
are subtle relationships.

As an example of a relationship btw the two kinds of complexity, look
at Crutchfield's
work on using formal languages to model the symbolic dynamics generated by
dynamical systems as they approach chaos.  He shows that as the parameter
values of a dynamical system approach those that induce a chaotic regime in
the system, the formal languages implicit in the symbolic-dynamics
representation
of the system's dynamics pass through more and more complex language classes.

And of course, recognizing a grammar in a more complex language class has
a higher computational complexity.

So, Crutchfield's work shows a connection btw self-organizing complexity and
computational complexity, via the medium of formal languages and symbolic
dynamics.
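
As a minimal concrete sketch of the symbolic-dynamics construction (using the
logistic map as the dynamical system and a crude two-symbol partition; the
parameter values and word length here are just illustrative):

# Symbolic dynamics of the logistic map x -> r*x*(1-x).  Each trajectory is
# coarse-grained into a binary string via the partition x < 0.5 -> '0',
# x >= 0.5 -> '1'.  As r moves toward and into the chaotic regime, the set of
# observed words grows richer -- the kind of growth in language complexity
# Crutchfield models with increasingly complex formal language classes.

def symbol_sequence(r, x0=0.4, n=2000, burn=500):
    x, out = x0, []
    for i in range(n + burn):
        x = r * x * (1.0 - x)
        if i >= burn:
            out.append('0' if x < 0.5 else '1')
    return ''.join(out)

def distinct_words(seq, length=6):
    return len({seq[i:i + length] for i in range(len(seq) - length + 1)})

for r in (3.2, 3.5, 3.58, 3.8, 4.0):
    print(r, distinct_words(symbol_sequence(r)))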

As another, more pertinent example, the Novamente design seeks to avoid
the combinatorial explosions implicit in each of its individual AI
learning/reasoning
components, via integrating these components together in an appropriate way.
This integration, via its impact on the overall system dynamics,
leads to a certain degree of complexity in the self-organizing-systems sense

-- Ben G

On Dec 11, 2007 10:09 AM, Richard Loosemore [EMAIL PROTECTED] wrote:
 Mike Tintner wrote:
  Essentially, Richard  others are replaying the same old problems of
  computational explosions - see computational complexity in this
  history of cog. sci. review - no?

 No:  this is a misunderstanding of complexity unfortunately (cf the
 footnote on p1 of my AGIRI paper):  computational complexity refers to
 how computations scale up, which is not at all the same as the
 complexity issue, which is about whether or not a particular system
 can be explained.

 To see the difference, imagine an algorithm that was good enough to be
 intelligent, but scaling it up to the size necessary for human-level
 intelligence would require a computer the size of a galaxy.  Nothing
 wrong with the algorithm, and maybe with a quantum computer it would
 actually work.  This algorithm would be suffering from a computational
 complexity problem.

 By contrast, there might be proposed algorithms for implementing a
 human-level intelligence which will never work, no matter how much they
 are scaled up (indeed, they may actually deteriorate as they are scaled
 up).  If this was happening because the designers were not appreciating
 that they needed to make subtle and completely non-obvious changes in
 the algorithm, to get its high-level behavior to be what they wanted it
 to be, and if this were because intelligence requires
 complexity-generating processes inside the system, then this would be a
 complex systems problem.

 Two completely different issues.


 Richard Loosemore






  Mechanical Mind
  Gilbert Harman
  Mind as Machine: A History of Cognitive Science. Margaret A. Boden. Two
  volumes, xlviii + 1631 pp. Oxford University Press, 2006. $225.
 
  The term cognitive science, which gained currency in the last half of
  the 20th century, is used to refer to the study of cognition-cognitive
  structures and processes in the mind or brain, mostly in people rather
  than, say, rats or insects. Cognitive science in this sense has
  reflected a growing rejection of behaviorism in favor of the study of
  mind and human information processing. The field includes the study of
  thinking, perception, emotion, creativity, language, consciousness and
  learning. Sometimes it has involved writing (or at least thinking about)
  computer programs that attempt to model mental processes or that provide
  tools such as spreadsheets, theorem provers, mathematical-equation
  solvers and engines for searching the Web. The programs might involve
  rules of inference or productions, mental models, connectionist
  neural networks or other sorts of parallel constraint satisfaction
  approaches. Cognitive science so understood includes cognitive
  neuroscience, artificial intelligence (AI), robotics and artificial
  life; conceptual, linguistic and moral development; and learning in
  humans, other animals and machines.
 
 
 
  Among those sometimes identifying themselves as cognitive scientists are
  philosophers, computer scientists, psychologists, linguists, engineers,
  biologists, medical researchers and mathematicians. Some individual
  contributors to the field have had expertise in several of these more
  traditional disciplines. An excellent example is the philosopher,
  psychologist and computer scientist Margaret Boden, who founded the
  School of Cognitive and Computing Sciences at the University of Sussex
  and is the author of a number of books, including Artificial
  Intelligence and Natural Man (1977) and The Creative Mind (1990). Boden
  has been active in cognitive science pretty much from the start and has
  known many of 

Re: [agi] AGI communities and support

2007-12-08 Thread Benjamin Goertzel
 Thanks Bob. But I meant, it looks more likely that robots will achieve - and
 have already taken the first concrete steps to achieve - the goals of AGI -
 the capacity to learn a range of abilities and activities.

Can you point to any single robot that has demonstrated the capability to
learn a range of abilities and activities?

I don't think you can.

It seems to me that all you're saying is, basically, that robots have been used
to do a lot of different things.

So have software programs.

But, a collection of highly specialized intelligences does not comprise
a general intelligence.  It may be (one way to make) a significant part of
a general intelligence architecture, but if so, I would argue the hard part
is being left out...

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=73930833-e36fbf


Re: [agi] AGI communities and support

2007-12-08 Thread Benjamin Goertzel
 Yes I expect to see more narrow AI robotics in future, but as time
 goes on there will be pressures to consolidate multiple abilities into
 a single machine.  Ergonomics dictates that people will only accept a
 limited number of mobile robots in their homes or work spaces.
 Physical space is at a premium, and you don't want multiple devices
 getting in your way all the time.  In a similar manner it's
 inconvenient to have to carry around multiple electronic gadgets -
 really you just want a single gadget which does lots of things.

My prediction is that, when robotics firms finally get around to working on
multi-purpose robots that need to carry out multiple intersecting, interacting
activities in messy real-world environments -- THEN they will all of a sudden
become intensively interested in work on cognitive architecture and AGI.
Work that is of very limited interest to them now since they are working on
robots with very specialized functionalities.

I don't think that a robot assembling parts in a factory has any more to do with
AGI than Google's statistical NLP engine does -- probably less.  Embodiment
may be a very useful ingredient for AGI systems, but, it's quite possible to do
narrow robotics and that is what nearly all roboticists are doing,
for the same
practical reasons that nearly all AI software researchers are doing narrow AI.

I agree that the DARPA challenge robots are interesting and are pushing more
in the AGI direction than nearly any commercial robotics.

It's worthy of note that the most interesting robotics work, from an
AGI view, seems to
be either

-- gov't funded research stuff like the DARPA challenge (most entrants
are universities
whose research projects live off gov't funding)

-- blue-sky R&D projects by big companies with money to burn, such as Honda's
Asimo project

I.e. the AGI meets robotics meme is VERY far from affecting the
commercial robotics
industry, it would seem.  Which is a shame.

I would like to break past this barrier by taking Webkinz as an
inspiration, and making
intelligent robotic toys that do most of their thinking on a remote
server farm (and interoperate
with intelligent agents in virtual worlds).  But this is going to be a
number of years in coming,
I'm sure.  The cost of really good robotics equipment remains high
(from the perspective
of a commercially viable robot toy), though decreasing rapidly.
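
For concreteness, a very rough sketch of the thin-robot / server-farm split I
have in mind (everything here -- the URL, the message fields -- is a made-up
placeholder, not an existing product or API):

import json, urllib.request

SERVER = "http://example.com/brain"   # placeholder endpoint

def think_remotely(percepts):
    # Ship raw percepts to the server farm; get an action decision back.
    req = urllib.request.Request(
        SERVER,
        data=json.dumps(percepts).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

# On-robot loop (hardware calls elided):
#   percepts = {"camera": ..., "battery": 0.87}
#   action = think_remotely(percepts)   # e.g. {"move": "forward", "say": "hi"}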

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=73963833-0c76f0


Re: [agi] AGI communities and support

2007-12-08 Thread Benjamin Goertzel
 So I reckon roboticists ARE actually focussed on an AGI challenge - whereas,
 as I've pointed out before, there is nothing comparable in pure AGI.

To my knowledge, none of the work on the ICRA Robotic Challenge is at
this point taking a strong AGI approach...

And
 with all those millions of investment bucks - I expect to see some results/
 genuine progress in the not too distant future.

Millions of investment bucks doesn't go that far in robotics, unfortunately.

I hope to see progress too, but I believe you're way optimistic about
the current
state of robotics research.

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=73975970-c08d44


Re: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Benjamin Goertzel
On Dec 7, 2007 7:09 AM, Mike Tintner [EMAIL PROTECTED] wrote:


  Matt: AGI research needs
  special hardware with massive computational capabilities.
 

 Could you give an example or two of the kind of problems that your AGI
 system(s) will need such massive capabilities to solve? It's so good - in
 fact, I would argue, essential - to ground these discussions.

Problems that would likely go beyond the capability of a current PC to solve
in a realistic amount of time, in the
current NM architecture, would include for instance:

-- Learning a new type of linguistic relationship (in the context of
link grammar, this would mean e.g. learning a new grammatical link type)

-- Learning a new truth value formula for a probabilistic inference rule

-- Recognizing objects in a complex, rapidly-changing visual scene

(Not that we have written the code to let the system solve these particular
problems yet ... but the architecture should allow it...)
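
To give a flavor of what a truth value formula for a probabilistic inference
rule looks like, here is one illustrative example: the standard
independence-based deduction formula (offered only as an illustration of the
kind of expression meant, not as the formula the system would have to learn):

  s_{AC} = s_{AB} s_{BC} + (1 - s_{AB}) \frac{s_C - s_B s_{BC}}{1 - s_B}

where s_{AB} = P(B|A), s_{BC} = P(C|B), s_B = P(B), s_C = P(C), and s_{AC} is
the inferred P(C|A); it holds under the assumption that C is independent of A
conditional on B and conditional on not-B.  Learning a new truth value formula
means learning a better-performing replacement for an expression like this.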

I don't think we need more than hundreds of PCs to deal with these things,
but we need more than a current PC, according to the behavior of our
current algorithms.

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=73544012-c56a06


Re: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Benjamin Goertzel
On Dec 7, 2007 10:21 AM, Bob Mottram [EMAIL PROTECTED] wrote:
 If I had 100 of the highest specification PCs on my desktop today (and
 it would be a big desk!) linked via a high speed network this wouldn't
 help me all that much.  Provided that I had the right knowledge I
 think I could produce a proof of concept type AGI on a single PC
 today, even if it ran like a tortoise.  It's the knowledge which is
 mainly lacking I think.

I agree that at the moment hardware is NOT the bottleneck.

This is why, while we've instrumented the Novamente system to
be straightforwardly extensible to a distributed implementation, we
haven't done much actual distributed processing implementation yet.

We have built commercial systems incorporating the NCE in simple
distributed architectures, but haven't gone the distributed-AGI direction
yet in practice -- because, as you say, it seems likely that the key
AGI problems can be
worked out on a single machine, and you can then scale up afterwards.

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=73609156-15fdf3


Re: [agi] None of you seem to be able ...

2007-12-07 Thread Benjamin Goertzel
On Dec 6, 2007 8:06 PM, Ed Porter [EMAIL PROTECTED] wrote:
 Ben,

 To the extent it is not proprietary, could you please list some of the types
 of parameters that have to be tuned, and the types, if any, of
 Loosemore-type complexity problems you envision in Novamente or have
 experienced with WebMind, in such tuning and elsewhere?

 Ed Porter

A specific list of parameters would have no meaning without a huge
explanation which I don't have time to give...

Instead I'll list a few random areas where choices need to be made, that appear
localized at first but wind up affecting the whole

-- attention allocation is handled by an artificial economy mechanism, which
has the same sorts of parameters as any economic system (analogues of
interest rates,
rent rates, etc.)

-- program trees representing internal procedures are normalized via a set of
normalization rules, which collectively cast procedures into a certain
normal form.
There are many ways to do this.

-- the pruning of (backward and forward chaining) inference trees uses a
statistical bandit problem methodology, which requires a priori probabilities
to be ascribed to various inference steps


Fortunately, though, in each of the above three
examples there is theory that can guide parameter tuning (different theories
in the three cases -- dynamic systems theory for the artificial economy; formal
computer science and language theory for program tree reduction; and Bayesian
stats for the pruning issue).
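
As a minimal sketch of the bandit-style pruning idea (the step names, priors
and scoring here are invented for illustration; this is not the actual
Novamente code):

import random

class InferenceStep:
    def __init__(self, name, prior_alpha=1.0, prior_beta=1.0):
        self.name = name
        self.alpha = prior_alpha   # prior pseudo-count of useful expansions
        self.beta = prior_beta     # prior pseudo-count of useless expansions

    def sample(self):
        # Thompson sampling: draw a plausible success rate from the posterior.
        return random.betavariate(self.alpha, self.beta)

    def update(self, useful):
        if useful:
            self.alpha += 1
        else:
            self.beta += 1

def choose_step(steps):
    # Expand the branch whose sampled success probability is highest.
    return max(steps, key=lambda s: s.sample())

steps = [InferenceStep("deduction", 2, 1),
         InferenceStep("abduction", 1, 2),
         InferenceStep("induction", 1, 1)]
chosen = choose_step(steps)
# ... expand the inference tree with 'chosen', observe whether the branch
# paid off, then feed that back:
chosen.update(useful=True)

The a priori probabilities mentioned above correspond to the initial
(alpha, beta) pseudo-counts.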

Webmind AI Engine had too many parameters and too much coupling between
subsystems.  We cast parameter optimization as an AI learning problem but it
was a hard one, though we did make headway on it.  Novamente Engine has much
coupling btw subsystems, but no unnecessary coupling; and many fewer
parameters on which system behavior can sensitively depend.  Definitely,
minimization of the number of needful-of-adjustment parameters is a very key
aspect of AGI system design.

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=73598324-4bf78b


Re: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Benjamin Goertzel
 Clearly the brain works VASTLY differently and more efficiently than current
 computers - are you seriously disputing that?

It is very clear that in many respects the brain is much less efficient than
current digital computers and software.

It is more energy-efficient by and large, as Read Montague has argued ...
but OTOH sometimes it is way less algorithmically efficient

For instance, in spite of its generally high energy efficiency, my brain wastes
a lot more energy calculating 969695775755 / 8884 than my computer
does.

And e.g. visual cortex, while energy-efficient, is horribly algorithmically
inefficient, involving e.g. masses of highly erroneous motion-sensing neurons
whose results are averaged together to give reasonably accurate values..

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=73893310-401039


Re: [agi] None of you seem to be able ...

2007-12-06 Thread Benjamin Goertzel
On Dec 5, 2007 6:23 PM, Mike Tintner [EMAIL PROTECTED] wrote:

 Ben:  To publish your ideas
  in academic journals, you need to ground them in the existing research
  literature,
  not in your own personal introspective observations.

 Big mistake. Think what would have happened if Freud had omitted the 40-odd
 examples of slips in The Psychopathology of Everyday Life (if I've got the
 right book!)

Obviously, Freud's reliance on introspection and qualitative experience had
plusses and minuses.  He generated a lot of nonsense as well as some
brilliant ideas.

But anyway, I was talking about style of exposition, not methodology of
doing work.  If Freud were a professor today, he would write in a different
style in order to get journal publications; though he might still write some
books in a more expository style as well.

I was pointing out that, due to the style of exposition required in contemporary
academic culture, one can easily get a false impression that no one in academia
is doing original thinking -- but the truth is that, even if you DO
original thinking,
you are required in writing your ideas up for publication to give them
the appearance
of minimal originality via grounding them exorbitantly in the prior
literature (even if in fact
their conception had nothing, or very little, to do with the prior
literature).  I'm not
saying I like this -- I'm just describing the reality.  Also, in the
psych literature, grounding
an idea in your own personal observations is not acceptable and is not
going to get
you published -- unless of course you're a clinical psychologist,
which I am not.

The scientific heavyweights are the people who are heavily
 grounded. The big difference between Darwin and Wallace is all those
 examples/research, and not the creative idea.

That is an unwarranted overgeneralization.

Anyway YOU were the one who was harping on the lack of creativity in AGI.

Now you've changed your tune and are harping on the lack of {creativity coupled
with a lot of empirical research}

Ever consider that this research is going on RIGHT NOW?  I don't know why you
think it should be instantaneous.  A number of us are doing concrete
research work
aimed at investigating our creative ideas about AGI.  Research is
hard.  It takes
time.  Darwin's research took time.  The Manhattan Project took time.  etc.

 And what I didn't explain in my simple, but I believe important, two-stage
 theory of creative development is that there's an immense psychological
 resistance to moving onto the second stage. You have enough psychoanalytical
 understanding, I think, to realise that the unusual length of your reply to
 me may possibly be a reflection of that resistance and an inner conflict.

What is bizarre to me, in this psychoanalysis of Ben Goertzel that you present,
is that you overlook
the fact that I am spending most of my time on concrete software projects, not
on abstract psychological/philosophical theory -- including the Novamente
Cognition Engine
project which is aimed precisely at taking some of my creative ideas about AGI
and realizing them in useful software

As it happens, my own taste IS more for theory, math and creative arts than
software development -- but, I decided some time ago that the most IMPORTANT
thing I could do would be to focus a lot of attention on
implementation and detailed
design rather than just generating more and more funky ideas.  It is
always tempting to me to
consider my role as being purely that of a thinker, and leave all
practical issues to others
who like that sort of thing better -- but I consider the creation of
AGI *so* important
that I've been willing to devote the bulk of my time to activities
that run against my
personal taste and inclination, for some years now...  And
fortunately I have found
some great software engineers as collaborators.


 P.S. Just recalling a further difference between the original and the
 creative thinker - the creative one has greater *complexes* of ideas - it
 usually doesn't take just one idea to produce major creative work, as people
 often think, but a whole interdependent network of them. That, too, is v.
 hard.

Mike, you can make a lot of valid criticisms against me, but I don't
think you can
claim I have not originated an interdependent network of creative ideas.
I certainly have done so.  You may not like or believe my various ideas, but
for sure they form an interdependent network.  Read The Hidden Pattern
for evidence.

-- Ben Goertzel

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=73146505-9fe3b7


Re: [agi] None of you seem to be able ...

2007-12-06 Thread Benjamin Goertzel
There is no doubt that complexity, in the sense typically used in
dynamical-systems-theory, presents a major issue for AGI systems.  Any
AGI system with real potential is bound to have a lot of parameters
with complex interdependencies between them, and tuning these
parameters is going to be a major problem.  The question is whether
one has an adequate theory of one's system to allow one to do this
without an intractable amount of trial and error.  Loosemore -- if I
interpret him correctly -- seems to be suggesting that for powerful
AGI systems no such theory can exist, on principle.  I doubt very much
this is correct.

-- Ben G

On Dec 6, 2007 9:40 AM, Ed Porter [EMAIL PROTECTED] wrote:
 Jean-Paul,

 Although complexity is one of the areas associated with AI where I have less
 knowledge than many on the list, I was aware of the general distinction you
 are making.

 What I was pointing out in my email to Richard Loosemore was that the
 definitions in his paper Complex Systems, Artificial Intelligence and
 Theoretical Psychology, for irreducible computability and global-local
 interconnect themselves are not totally clear about this distinction, and
 as a result, when Richard says that those two issues are an unavoidable part
 of AGI design that must be much more deeply understood before AGI can
 advance, by the more loose definitions which would cover the types of
 complexity involved in large matrix calculations and the design of a massive
 supercomputer, of course those issues would arise in AGI design, but it's no
 big deal because we have a long history of dealing with them.

 But in my email to Richard I said I was assuming he was not using these
 looser definitions of these words, because if he were, they would not present
 the unexpected difficulties of the type he has been predicting.  I said I
 though he was dealing with more the potentially unruly type of complexity, I
 assume you were talking about.

 I am aware of that type of complexity being a potential problem, but I have
 designed my system to hopefully control it.  A modern-day well functioning
 economy is complex (people at the Santa Fe Institute often cite economies as
 examples of complex systems), but it is often amazingly unchaotic
 considering how loosely it is organized and how many individual entities it
 has in it, and how many transitions it is constantly undergoing.  Usually,
 unless something bangs on it hard (such as having the price of a major
 commodity all of a sudden triple), it has a fair amount of stability, while
 constantly creating new winners and losers (which is a productive form of
 mini-chaos).  Of course in the absence of regulation it is naturally prone
 to boom and bust cycles.

 So the system would need regulation.

 Most of my system operates on a message-passing basis with little concern
 for synchronization; it does not require low latencies, and most of its units
 operate under fairly similar code.  But hopefully when you get it all
 working together it will be fairly dynamic, and that dynamism will be under
 multiple controls.

 I think we are going to have to get such systems up and running to find out
 just how hard or easy they will be to control, which I acknowledged in my
 email to Richard.  I think that once we do we will be in a much better
 position to think about what is needed to control them.  I believe such
 control will be one of the major intellectual challenges to getting AGI to
 function at a human level.  This issue is not only preventing runaway
 conditions, it is optimizing the intelligence of the inferencing, which I
 think will be even more important and difficult.  (There are all sorts of
 damping mechanisms and selective biasing mechanism that should be able to
 prevent many types of chaotic behaviors.)  But I am quite confident with
 multiple teams working on it, these control problems could be largely
 overcome in several years, with the systems themselves doing most of the
 learning.

 Even a little OpenCog AGI on a PC could be an interesting first indication of
 the extent to which complexity will present control problems.  As I said, if
 you had 3GB of RAM for representation, that should allow about 50 million
 atoms.  Over time you would probably end up with at least hundreds of
 thousands of complex patterns, and it would be interesting to see how easy it
 would be to properly control them, and get them to work together as a
 properly functioning thought economy in whatever small interactive world
 they developed their self-organizing pattern base.  Of course on such a
 PC-based system you would only, on average, be able to do about 10 million
 pattern-to-pattern activations a second, so you would be talking about a
 fairly trivial system, but with say 100K patterns it would be a good first
 indication of how easy or hard AGI systems will be to control.
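 A rough sanity check on the implied memory budget, taking the figures above
 as assumptions rather than measurements:

     ram_bytes = 3 * 10**9       # ~3GB of RAM set aside for representation
     atoms     = 50 * 10**6      # the 50 million atoms mentioned above
     print(ram_bytes / atoms)    # => 60.0

 That works out to a budget of roughly 60 bytes per atom.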

 Ed Porter

 -Original Message-
 From: Jean-Paul Van Belle [mailto:[EMAIL PROTECTED]
 Sent: Thursday, December 06, 

Re: [agi] None of you seem to be able ...

2007-12-06 Thread Benjamin Goertzel
 Conclusion:  there is a danger that the complexity that even Ben agrees
 must be present in AGI systems will have a significant impact on our
 efforts to build them.  But the only response to this danger at the
 moment is the bare statement made by people like Ben that I do not
 think that the danger is significant.  No reason given, no explicit
 attack on any component of the argument I have given, only a statement
 of intuition, even though I have argued that intuition cannot in
 principle be a trustworthy guide here.

But Richard, your argument ALSO depends on intuitions ...

I'll try, though, to more concisely frame the reason I think your argument
is wrong.

I agree that AGI systems contain a lot of complexity in the dynamical-
systems-theory sense.

And I agree that tuning all the parameters of an AGI system externally
is likely to be intractable, due to this complexity.

However, part of the key to intelligence is **self-tuning**.

I believe that if an AGI system is built the right way, it can effectively
tune its own parameters, hence adaptively managing its own complexity.

Now you may say there's a problem here: If AGI component A2 is to
tune the parameters of AGI component A1, and A1 is complex, then
A2 has got to also be complex ... and who's gonna tune its parameters?

So the answer has got to be that: To effectively tune the parameters
of an AGI component of complexity X, requires an AGI component of
complexity a bit less than X.  Then one can build a self-tuning AGI system,
if one does the job right.

Now, I'm not saying that Novamente (for instance) is explicitly built
according to this architecture: it doesn't have N components wherein
component A_N tunes the parameters of component A_(N+1).

But in many ways, throughout the architecture, it relies on this sort of
fundamental logic.

Obviously it is not the case that every system of complexity X can
be parameter-tuned by a system of complexity less than X.  The question
however is whether an AGI system can be built of such components.
I suggest the answer is yes -- and furthermore suggest that this is
pretty much the ONLY way to do it...

Your intuition is that this is not possible, but you don't have a proof
of this...

And yes, I realize the above argument of mine is conceptual only -- I haven't
given a formal definition of complexity.  There are many, but that would
lead into a mess of math that I don't have time to deal with right now,
in the context of answering an email...
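But just to convey the flavor of the idea in code -- this is a toy
illustration only, not the Novamente mechanism, and the component and
parameter names are made up -- here is a crude "simpler tuner adjusts a more
complex component" loop:

import random

def hill_climb_tuner(evaluate, params, steps=200, scale=0.1):
    # The tuner is deliberately dumb and structurally simple: perturb the
    # parameters of the component below it and keep whatever improves the
    # component's measured performance.
    best, best_score = dict(params), evaluate(params)
    for _ in range(steps):
        candidate = {k: v + random.uniform(-scale, scale) for k, v in best.items()}
        score = evaluate(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

def complex_component_quality(p):
    # Stand-in for a "complex" component whose quality depends on
    # interacting parameters (a purely hypothetical objective).
    return -(p["a"] - 1.0) ** 2 - (p["b"] + 0.5) ** 2 - 0.3 * p["a"] * p["b"]

tuned, score = hill_climb_tuner(complex_component_quality, {"a": 0.0, "b": 0.0})
print(tuned, score)

The point is only that the tuning loop is simpler than the thing it tunes;
whether that trick can be stacked all the way up in a real AGI design is
exactly the open question.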

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=73243865-194e0e


Re: Last word for the time being [WAS Re: [agi] None of you seem to be able ...]

2007-12-06 Thread Benjamin Goertzel
 Show me ONE other example of the reverse engineering of a system in
 which the low level mechanisms show as many complexity-generating
 characteristics as are found in the case of intelligent systems, and I
 will gladly learn from the experience of the team that did the job.

 I do not believe you can name a single one.

Well, I am not trying to reverse engineer the brain.  Any more than
the Wright Brothers were trying to reverse engineer  a bird -- though I
do imagine the latter will eventually be possible.

 You know, I sympathize with you in a way.  You are trying to build an
 AGI system using a methodology that you are completely committed to.
 And here am I coming along like Bertrand Russell writing his letter to
 Frege, just as poor Frege was about to publish his Grundgesetze der
 Arithmetik, pointing out that everything in the new book was undermined
 by a paradox.  How else can you respond except by denying the idea as
 vigorously as possible?

It's a deeply flawed analogy.

Russell's paradox is a piece of math and once Frege
was confronted with it he got it.  The discussion between the two of them
did not devolve into long, rambling dialogues about the meanings of terms
and the uncertainties of various intuitions.

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=73249230-63bddf


Re: [agi] None of you seem to be able ...

2007-12-05 Thread Benjamin Goertzel
Tintner wrote:
 Your paper represents almost a literal application of the idea that
 creativity is ingenious/lateral. Hey it's no trick to be just
 ingenious/lateral or fantastic.

Ah ... before creativity was what was lacking.  But now you're shifting
arguments and it's something else that is lacking ;-)


 You clearly like producing new psychological ideas - from a skimming of your
 work, you've produced several. However, I didn't come across a single one
 that was grounded or where any attempt was made to ground them in direct,
 fresh observation (as opposed to occasionally referring to an existing
 scientific paper).

That is a very strange statement.

In fact nearly all my psychological ideas
are grounded in direct, fresh **introspective** observation ---
but they're not written up that way
because that's not the convention in modern academia.  To publish your ideas
in academic journals, you need to ground them in the existing research
literature,
not in your own personal introspective observations.

It is true that few of my psychological hypotheses are grounded in my own novel
lab experiments, though.  I did a little psych lab work in the late
90's, in the domain of
perceptual illusions -- but the truth is that psych and neuroscience
are not currently
sophisticated enough to allow empirical investigation of really
interesting questions about
the nature of cognition, self, etc.  Wait a couple decades, I guess.

 In terms of creative psychology, that is consistent with
 your resistance to producing prototypes - and grounding your
 invention/innovation.

Well, I don't have any psychological resistance to producing working
software, obviously.

Most of my practical software work has been proprietary for customers; but,
check out MOSES and OpenBiomind on Google Code -- two open-source projects that
have emerged from my Novamente LLC and Biomind LLC work ...

It just happens that AGI does not lend itself to prototyping, for
reasons I've already tried
and failed to explain to you

We're gonna launch trainable, adaptive virtual animals in Second Life sometime
in 2008...  But I won't consider them real prototypes of Novamente
AGI, even though in
fact they will use several aspects of the Novamente Cognition Engine
software.  They
won't embody the key emergent structures/dynamics that I believe need
to be there to have
human-level cognition -- and there is no simple prototype system that
will do so.

You celebrate Jeff Hawkins' prototype systems, but have you tried
them?  He's built
(or, rather Dileep George has built)
an image classification engine, not much different in performance from
many others out there.
It's nice work but it's not really an AGI prototype, it's an image classifier.
He may be sort-of labeling it a prototype of his AGI approach -- but
really, it doesn't prove anything
dramatic about his AGI approach.  No one who inspected his code and
ran it would think that it
did provide such proof.

 There are at least two stages of creative psychological development - which
 you won't find in any literature. The first I'd call simply original
 thinking, the second is truly creative thinking. The first stage is when
 people realise they too can have new ideas and get hooked on the excitement
 of producing them. Only much later comes the second stage, when thinkers
 realise that truly creative ideas have to be grounded. Arguably, the great
 majority of people who may officially be labelled as creatives, never get
 beyond the first stage - you can make a living doing just that. But the most
 beautiful and valuable ideas come from being repeatedly refined against the
 evidence. People resist this stage because it does indeed mean a lot of
 extra work, but it's worth it.  (And it also means developing that inner
 faculty which calls for actual evidence).

OK, now you're making a very different critique than what you started
with though.

Before you were claiming there are no creative ideas in AGI.

Now, when confronted with creative ideas, you're complaining that they're not
grounded via experimental validation.

Well, yeah...

And the problem is that if one's creative ideas pertain to the
dynamics of large-scale,
complex software systems, then it takes either a lot of time or a lot
of money to achieve
this validation that you mention.

It is not the case that I (and other AGI researchers) are somehow
psychologically
undesirous of seeing our creative ideas explored via experiment.  It
is, rather, the case
that doing the relevant experiments requires a LOT OF WORK, and we are
few in number
with relatively scant resources.

What I am working toward, with Novamente and soon with OpenCog as
well, is precisely
the empirical exploration of the various creative ideas of myself,
others whose work has
been built on in the Novamente design, and my colleagues...

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:

Re: [agi] None of you seem to be able ...

2007-12-04 Thread Benjamin Goertzel
 More generally, I don't perceive any readiness to recognize that  the brain
 has the answers to all the many unsolved problems of AGI  -

Obviously the brain contains answers to many of the unsolved problems of
AGI (not all -- e.g. not the problem of how to create a stable goal system
under recursive self-improvement).   However, current neuroscience does
NOT contain these answers.

And neither you nor anyone else has ever made a cogent argument that
emulating the brain is the ONLY route to creating powerful AGI.  The closest
thing to such an argument that I've seen
was given by Eric Baum in his book What Is
Thought?, and I note that Eric has backed away somewhat from that
position lately.

 I think it
 should be obvious that AGI isn't going to happen - and none of the unsolved
 problems are going to be solved - without major creative leaps. Just look
 even at the ipod & iphone - major new technology never happens without such
 leaps.

The above sentence is rather hilarious to me.

If the Ipod and Iphone are your measure for creative leaps then
there have been
loads and loads of major creative leaps in AGI and narrow-AI research.

Anyway it seems to me that you're not just looking for creative leaps,
you're looking
for creative leaps that match your personal intuition.  Perhaps the
real problem is that
your personal intuition about intelligence is largely off-base ;-)

As an example of a creative leap (that is speculative and may be wrong, but is
certainly creative), check out my hypothesis of emergent social-psychological
intelligence as related to mirror neurons and octonion algebras:

http://www.goertzel.org/dynapsyc/2007/mirrorself.pdf

I happen to think the real subtlety of intelligence happens on the
emergent level,
and not on the level of the particulars of the system that gives rise
to the emergent
phenomena.  That paper conjectures some example phenomena that I believe
occur on the emergent level of intelligent systems.

Loosemore agrees with me on the importance of emergence, but he feels
there is a fundamental
irreducibility that makes it pragmatically impossible to figure out
via science, math
and intuition which concrete structures/dynamics will give rise to the
right emergent
structures, without doing a massive body of simulation experiments.  I
think he overstates
the degree of irreducibility.

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=72114408-ae9503


Re: [agi] None of you seem to be able ...

2007-12-04 Thread Benjamin Goertzel
On Dec 4, 2007 8:38 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
 Benjamin Goertzel wrote:
 [snip]
  And neither you nor anyone else has ever made a cogent argument that
  emulating the brain is the ONLY route to creating powerful AGI.  The closest
  thing to such an argument that I've seen
  was given by Eric Baum in his book What Is
  Thought?, and I note that Eric has backed away somewhat from that
  position lately.

 This is a pretty outrageous statement to make, given that you know full
 well that I have done exactly that.

 You may not agree with the argument, but that is not the same as
 asserting that the argument does not exist.

 Unless you were meaning emulating the brain in the sense of emulating
 it ONLY at the low level of neural wiring, which I do not advocate.

I don't find yours, nor Eric's, nor anyone else's argument that brain-emulation
is the golden path very strongly convincing...

However, I found Eric's argument, by reference to the compressed nature of
the genome, more convincing than your argument via the hypothesis of
irreducible emergent complexity...

Sorry if my choice of words was not adequately politic.  I find your argument
interesting, but it's certainly just as speculative as the various AGI theories
you dismiss...  It basically rests on a big assumption, which is that the
complexity of human intelligence is analytically irreducible within pragmatic
computational constraints.  In this sense it's less an argument than a
conjectural
assertion, albeit an admirably bold one.

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=72126612-7f96e4


Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-04 Thread Benjamin Goertzel
 Thus: building a NL parser, no matter how good it is, is of no use
 whatsoever unless it can be shown to emerge from (or at least fit with)
 a learning mechanism that allows the system itself to generate its own
 understanding (or, at least, acquisition) of grammar IN THE CONTEXT OF A
 MECHANISM THAT ALSO ACCOMPLISHES REAL UNDERSTANDING. When that larger
 issue is dealt with, a NL parser will arise naturally, and any previous
 work on non-developmental, hand-built parsers will be completely
 discarded. You were trumpeting the importance of work that I know will
 be thrown away later, and in the mean time will be of no help in
 resolving the important issues.

Richard, you discount the possibility that said NL parser will play a key
role in the adaptive emergence of a system that can generate its own
linguistic understanding.  I.e., you discount the possibility that, with the
right learning mechanism and instructional environment, hand-coded
rules may serve as part of the initial seed for a learning process that will
eventually generate knowledge obsoleting these initial hand-coded
rules.

It's fine that you discount this possibility -- I just want to point out that
in doing so, you are making a bold and unsupported theoretical hypothesis,
rather than stating an obvious or demonstrated fact.

Vaguely similarly, the grammar of child language is largely thrown
away in adulthood, yet it was useful as scaffolding in leading to the
emergence of adult language.
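To illustrate the flavor of what I mean (a toy sketch with made-up names, not
an actual parser or Novamente design): hand-coded seed rules can simply be the
fallback that learned statistics eventually override --

from collections import Counter, defaultdict

SEED_RULES = {"dog": "NOUN", "runs": "VERB", "fast": "ADV"}   # hand-coded seed

class SeededTagger:
    def __init__(self, min_evidence=3):
        self.counts = defaultdict(Counter)   # word -> observed tag counts
        self.min_evidence = min_evidence

    def observe(self, word, tag):
        self.counts[word][tag] += 1

    def tag(self, word):
        learned = self.counts[word]
        if sum(learned.values()) >= self.min_evidence:
            return learned.most_common(1)[0][0]   # learned knowledge wins
        return SEED_RULES.get(word, "UNKNOWN")    # otherwise fall back to seed

tagger = SeededTagger()
print(tagger.tag("fast"))            # 'ADV', straight from the seed rule
for _ in range(3):
    tagger.observe("fast", "ADJ")    # evidence from usage, e.g. "a fast car"
print(tagger.tag("fast"))            # 'ADJ' -- the seed rule is now shadowed

The hand-coded entries get the system off the ground; once enough experience
accumulates they are simply shadowed, which is all I mean by the seed being
obsoleted.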

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=72129171-2bf67a


Re: [agi] None of you seem to be able ...

2007-12-04 Thread Benjamin Goertzel
Richard,

Well, I'm really sorry to have offended you so much, but you seem to be
a mighty easy guy to offend!

I know I can be pretty offensive at times; but this time, I wasn't
even trying ;-)

 The argument I presented was not a conjectural assertion, it made the
 following coherent case:

1) There is a high prima facie *risk* that intelligence involves a
 significant amount of irreducibility (some of the most crucial
 characteristics of a complete intelligence would, in any other system,
 cause the behavior to show a global-local disconnect), and

The above statement contains two fuzzy terms -- high and significant ...

You have provided no evidence for any particular quantification of
these terms...
your evidence is qualitative/intuitive, so far as I can tell...

Your quantification of these terms seems to me a conjectural assertion
unsupported by evidence.

2) Because of the unique and unusual nature of complexity there is
 only a vanishingly small chance that we will be able to find a way to
 assess the exact degree of risk involved, and

3) (A corollary of (2)) If the problem were real, but we were to
 ignore this risk and simply continue with an engineering approach
 (pretending that complexity is insignificant),

The engineering approach does not pretend that complexity is
insignificant.  It just denies that the complexity of intelligent systems
leads to the sort of irreducibility you suggest it does.

Some complex systems can be reverse-engineered in their general
principles even if not in detail.  And that is all one would need to do
in order to create a brain emulation (not that this is what I'm trying
to do) --- assuming one's goal was not to exactly emulate some
specific human brain based on observing the behaviors it generates,
but merely to emulate the brainlike character of the system...

 then the *only* evidence
 we would ever get that irreducibility was preventing us from building a
 complete intelligence would be the fact that we would simply run around
 in circles all the time, wondering why, when we put large systems
 together, they didn't quite make it, and

No.  Experimenting with AI systems could lead to evidence that would
support the irreducibility hypothesis more directly than that.  I doubt they
will but it's possible.  For instance, we might discover that creating more and
more intelligent systems inevitably presents more and more complex
parameter-tuning problems, so that parameter-tuning appears to be the
bottleneck.  This would suggest that some kind of highly expensive evolutionary
or ensemble approach as you're suggesting might be necessary.

4) Therefore we need to adopt a Precautionary Principle and treat
 the problem as if irreducibility really is significant.


 Whether you like it or not - whether you've got too much invested in the
 contrary point of view to admit it, or not - this is a perfectly valid
 and coherent argument, and your attempt to try to push it into some
 lesser realm of a conjectural assertion is profoundly insulting.

The form of the argument is coherent and valid; but the premises involve
fuzzy quantifiers whose values you are apparently setting by
intuition, and whose
specific values sensitively impact the truth value of the conclusion.

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=72135696-ff196d


[agi] Re: A global approach to AI in virtual, artificial and real worlds

2007-12-04 Thread Benjamin Goertzel
 What makes anyone think OpenCog will be different?  Is it more
 understandable?  Will there be long-term aficionados who write
 books on how to build systems in OpenCog?  Will the developers
 have experience, or just adolescent enthusiasm?  I'm watching
 the experiment to find out.

Well, OpenCog has more than one possible development avenue
associated with it.

On the one hand, I have some quite specific AGI design ideas which
I intend to publish next year (major aspects of the Novamente AGI
design), which are suited to be implemented within OpenCog.
Since I believe these ideas are capable of leading to the development
of AGI at the human level and beyond (though there are many
moderate-sized research problems that must be solved along the way,
and yes, I realize the possibility that one of these blows up and
becomes a show-stopper, but I'm betting that won't happen and
I've certainly thought about it a lot...), I believe OpenCog
has big potential in this regard, if folks choose to develop it in that
way.

On the other hand OpenCog may also be quite valuable as
a platform for the development of other folks' AGI ideas, potentially
ones quite different from my own.  I don't know what will develop
and neither do any of us, I would suppose...

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=72154694-0749bf


Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-04 Thread Benjamin Goertzel
OK, understood...

On Dec 4, 2007 9:32 PM, Richard Loosemore [EMAIL PROTECTED] wrote:

 Benjamin Goertzel wrote:
  Thus: building a NL parser, no matter how good it is, is of no use
  whatsoever unless it can be shown to emerge from (or at least fit with)
  a learning mechanism that allows the system itself to generate its own
  understanding (or, at least, acquisition) of grammar IN THE CONTEXT OF A
  MECHANISM THAT ALSO ACCOMPLISHES REAL UNDERSTANDING. When that larger
  issue is dealt with, a NL parser will arise naturally, and any previous
  work on non-developmental, hand-built parsers will be completely
  discarded. You were trumpeting the importance of work that I know will
  be thrown away later, and in the mean time will be of no help in
  resolving the important issues.
 
  Richard, you discount the possibility that said NL parser will play a key
  role in the adaptive emergence of a system that can generate its own
  linguistic understanding.  I.e., you discount the possibility that, with the
  right learning mechanism and instructional environment, hand-coded
  rules may serve as part of the initial seed for a learning process that will
  eventually generate knowledge obsoleting these initial hand-coded
  rules.
 
  It's fine that you discount this possibility -- I just want to point out 
  that
  in doing so, you are making a bold and unsupported theoretical hypothesis,
  rather than stating an obvious or demonstrated fact.
 
  Vaguely similarly, the grammar of child language is largely thrown
  away in adulthood, yet it was useful as scaffolding in leading to the
  emergence of adult language.

 The problem is that this discussion has drifted away from the original
 context in which I made the remarks.

 I do *not* discount the possibility that an ordinary NL parser may play
 a role in the future.

 What I was attacking was the idea that a NL parser that does a wonderful
 job today (but which is built on a formalism that ignores all the issues
 involved in getting an adaptive language-understanding system working)
 is IPSO FACTO going to be a valuable step in the direction of a full
 adaptive system.

 It was the linkage that I dismissed.  It was the idea that BECAUSE the
 NL parser did such a great job, therefore it has a very high probability
 of being a great step on the road to a full adaptive (etc) language
 understanding system.

 If the NL parser completely ignores those larger issues I am justified
 in saying that it is a complete crap shoot whether or not this
 particular parser is going to be of use in future, more complete
 theories of language.

 But that is not the same thing as making a blanket dismissal of all
 parsers, saying they cannot be of any use as (as you point out) seed
 material in the design of a complete system.

 I was objecting to Ed's pushing this particular NL parser in my face and
 insisting that I should respect it as a substantial step towards full
 AGI ... and my objection was that I find models like that all show
 and no deep substance precisely because they ignore the larger issues
 and go for the short-term gratification of a parser that works really well.

 So I was not taking the position you thought I was.




 Richard Loosemore





 -
 This list is sponsored by AGIRI: http://www.agiri.org/email
 To unsubscribe or change your options, please go to:
 http://v2.listbox.com/member/?;


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=72155184-923590


Re: [agi] Funding AGI research

2007-11-30 Thread Benjamin Goertzel
On Nov 30, 2007 7:57 AM, Mike Tintner [EMAIL PROTECTED] wrote:
 Ben: It seems to take tots a damn lot of trials to learn basic skills

 Sure. My point is partly that human learning must be pretty quantifiable in
 terms of number of times a given action is practised,

Definitely NOT ... it's very hard to quantify when a child is
practicing crawling
versus just rehearsing the component arm/leg movements, wiggling around,
etc.  I can imagine that quantifying this sort of thing in a really meaningful
way must be fairly difficult...

  I wonder whether
 anyone's counting.

I agree it's a worthwhile effort, though.  I don't think anyone has counted this
sort of thing because it would require constant surveillance of the child.

The data being gathered in Deb Roy's Human Speechome project should
actually be useful for this -- he's got a video camera on his young
child nearly
24 hours a day...

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=70743699-cc240c


Re: FW: [agi] AGI DARPA-style

2007-11-30 Thread Benjamin Goertzel
Yeah, I've been following that for a while.  There are some very smart
people involved, and it's quite possible they'll make a useful
software tool, but I don't feel they have a really viable unified
cognitive architecture.  It's the sort of architecture where different
components are written in different programming languages based on
unrelated ideas and are hooked together in an overall architecture,
interacting with each other as black boxes.  No emergent intelligence
via inter-component interactions is likely to arise in this
approach...  And of course the lack of embodiment makes any solution
to the symbol-grounding problem unlikely to emerge...

-- Ben

On Nov 30, 2007 12:59 PM, Ed Porter [EMAIL PROTECTED] wrote:
 Also checkout http://caloproject.sri.com/publications/ for a list of CALO
 related publications

 Ed Porter


 -Original Message-
 From: Ed Porter [mailto:[EMAIL PROTECTED]
 Sent: Friday, November 30, 2007 12:58 PM
 To: 'agi@v2.listbox.com'
 Subject: RE: [agi] AGI DARPA-style

 Checkout AGI DARPA-style:

 Software That Learns from Users-- A massive AI project called CALO could
 revolutionize machine learning at
 http://www.technologyreview.com/Infotech/19782/?a=f

 Ed Porter

 -
 This list is sponsored by AGIRI: http://www.agiri.org/email
 To unsubscribe or change your options, please go to:
 http://v2.listbox.com/member/?;


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=70909538-e0531f


Re: [agi] Funding AGI research

2007-11-29 Thread Benjamin Goertzel

 [What related principles govern the Novamente figure's trial and error
 learning of how to pick up a ball?]

Pure trial and error learning is really slow though... we are now
relying on a combination of

-- reinforcement from a teacher
-- imitation of others' behavior
-- trial and error
-- active correction of wrong behavior by a teacher
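Just to make that concrete at a toy level (an illustrative sketch with
made-up weights, not the actual Novamente learning code), blending those
signals might look like:

import random

class BlendedLearner:
    def __init__(self):
        self.skill = 0.0   # hypothetical scalar stand-in for a learned policy

    def update(self, reward=0.0, imitation_target=None, correction=None):
        # trial and error: random exploration, reinforced when rewarded
        self.skill += 0.05 * reward * random.uniform(0.5, 1.5)
        # imitation: drift part of the way toward a demonstrated behavior
        if imitation_target is not None:
            self.skill += 0.2 * (imitation_target - self.skill)
        # active correction: a teacher directly supplies a better value
        if correction is not None:
            self.skill += 0.5 * (correction - self.skill)

learner = BlendedLearner()
learner.update(reward=1.0)              # reinforcement from a teacher
learner.update(imitation_target=0.8)    # imitating another agent's behavior
learner.update(correction=1.0)          # explicit correction of a wrong behavior
print(round(learner.skill, 3))

The real system works on structured behaviors rather than a single number, of
course; the sketch only shows several signal sources feeding one update.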

ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=70641251-aaef7a


Re: [agi] Funding AGI research

2007-11-29 Thread Benjamin Goertzel
On Nov 29, 2007 11:35 PM, Mike Tintner [EMAIL PROTECTED] wrote:
 Presumably, human learning isn't that slow though - if you simply count the
 number of attempts made before any given movement is mastered at a basic
 level (e.g. crawling/walking/grasping/tennis forehand etc.)? My guess
 would be that, for all the frustrations involved, we need relatively few
 attempts. Maybe in the hundreds or thousands at most?

It seems to take tots a damn lot of trials to learn basic skills, and we have
plenty of inductive bias in our evolutionary wiring...

 But then it seems increasingly clear that we use maps/ graphics/ schemas to
 guide our movements -  have you read the latest Blakeslee book on body maps?

So does Novamente; it uses an internal simulation-world (among other
mechanisms)... but that doesn't
magically make learning rapid, though it makes it more tractable...

ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=70644788-023e28


Re: Re[12]: [agi] Funding AGI research

2007-11-29 Thread Benjamin Goertzel
On Nov 30, 2007 12:03 AM, Dennis Gorelik [EMAIL PROTECTED] wrote:
 Benjamin,

  That proves my point [that AGI project can be successfully split
  into smaller narrow AI subprojects], right?

  Yes, but it's a largely irrelevant point.  Because building a narrow-AI
  system in an AGI-compatible way is HARDER than building that same
  narrow-AI component in a non-AGI-compatible way.

 Even if this is the case (which it is not), that would simply mean several
 development steps:
 1) Develop narrow AI with non-reusable AI component and get rewarded
 for that (because it would be useful system by itself).

Obviously, most researchers who have developed useful narrow-AI
components have not gotten rich from it.  The nature of our economy and
society is such that most scientific and technical innovators are not
dramatically
financially rewarded.

 2) Refactor non-reusable AI component into reusable AI component and
 get rewarded for that (because it would reusable component for sale).
 3) Apply reusable AI component in AGI and get rewarded for that.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=70648456-e5f42e


Re: Re[12]: [agi] Funding AGI research

2007-11-29 Thread Benjamin Goertzel
 So far only researchers/developers who picked narrow-AI approach
 accomplished something useful for AGI.
 E.g.: Google, computer languages, network protocols, databases.

These are tools that are useful for AGI R&D but so are computer
monitors, silicon chips, and desk chairs.  Being a useful tool for AGI
R&D does not make something constitute AGI R&D.

I do note that I myself have done (and am doing) plenty of narrow AI
work in parallel with AGI work.  So I'm not arguing against narrow AI
nor stating that narrow AI is irrelevant to AGI.  But your view of the
interrelationship seems extremely oversimplified to me.  If it were
as simple as you're saying, I imagine we'd have human-level AGI
already, as we have loads of decent narrow-AI's for various tasks.

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=70647705-610230


Re: Re[10]: [agi] Funding AGI research

2007-11-28 Thread Benjamin Goertzel
 ED: I must admit, I have never heard cortical column described as
 containing 10^5 neurons.  The figure I have commonly seen is 10^2 neurons
 for a cortical column, although my understanding is that the actual number
 could be either less or more.  I guess the 10^5 figure would relate to
 so-called hypercolumns.

The term cortical column is vague

http://en.wikipedia.org/wiki/Cortical_column

There are minicolumns (around 100 neurons each) and hypercolumns
(around 100 minicolumns each).  Both are called columns..
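Taking those rough figures at face value, the two numbers sit at opposite
ends of the terminology:

   ~10^2 neurons/minicolumn x ~10^2 minicolumns/hypercolumn ~ 10^4 neurons/hypercolumn

so 10^2 fits a minicolumn, and something on the order of 10^4 (or more, by
other counts) fits a hypercolumn.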

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=69432075-16cf74


Re: Re[4]: [agi] Funding AGI research

2007-11-27 Thread Benjamin Goertzel
  Still, this is the most
  resource-intensive part of
  the Novamente system (the part that's most likely to require
  supercomputers to
  achieve human-level AI).


 Why is it the most resource intensive, is it the evolutionary computational
 cost? Is this where MOSES is used?

Correct, this is one place that MOSES is used...

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=69229200-f1b2ef


Re: Re[10]: [agi] Funding AGI research

2007-11-27 Thread Benjamin Goertzel
  Nearly any AGI component can be used within a narrow AI,

 That proves my point [that AGI project can be successfully split
 into smaller narrow AI subprojects], right?

Yes, but it's a largely irrelevant point.  Because building a narrow-AI
system in an AGI-compatible way is HARDER than building that same
narrow-AI component in a non-AGI-compatible way.

So, given the pressures of commerce and academia, people who are
motivated to make narrow-AI for its own sake, will almost never create
narrow-AI components that are useful for AGI.

And, anyone who creates narrow-AI components with an AGI outlook,
will have a large disadvantage in the competition to create optimal
narrow-AI systems given limited time and financial resources.

 Still, AGI-oriented researcher can pick appropriate narrow AI projects
 in a such way that:
 1) Narrow AI project will be considerably less complex than full AGI
 project.
 2) Narrow AI project will be useful by itself.
 3) Narrow AI project will be an important building block for the full
 AGI project.

 Would you agree that splitting very complex and big project into
 meaningful parts considerably improves chances of success?

Yes, sure ... but demanding that these meaningful parts

-- be economically viable

and/or

-- beat competing, somewhat-similar components in competitions

dramatically DECREASES chances of success ...

That is the problem.

An AGI may be built out of narrow-AI components, but these narrow-AI
components must be architected for AGI-integration, which is a lot of
extra work; and considered as standalone narrow-AI components, they
may not outperform other similar narrow-AI components NOT intended
for AGI-integration...

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=69277648-780726


Re: Re[8]: [agi] Funding AGI research

2007-11-27 Thread Benjamin Goertzel
 My claim is that it's possible [and necessary] to split massive amount
 of work that has to be done for AGI into smaller narrow AI chunks in
 such a way that every narrow AI chunk has it's own business meaning
 and can pay for itself.

You have not addressed my claim, which has massive evidence in the
history of AI research to date, that narrow AI chunks with AGI compatibility
are generally much harder to build than narrow AI chunks intended purely for
standalone performance, and hence will very rarely be the best economic
choice if one's goal is to make a narrow-AI chunk serving some practical
application within (the usual) tight time and cost constraints.

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=69291210-f650cd


Re: Re[4]: [agi] Funding AGI research

2007-11-26 Thread Benjamin Goertzel
Well, there is a discipline of computer science devoted to automatic
programming, i.e. synthesizing software based on specifications of
desired functionality.

State of the art is:

-- Just barely, researchers have recently gotten automated
program learning to synthesize an n log n sorting algorithm based on the goal
of sorting a large set of lists as rapidly as possible...

-- OTOH, automatic synthesis of logic circuits automatically carrying out
various tasks is now a fairly refined science, see e.g. Koza's GP III book

All in all we are nowhere near having AI software that can automatically
synthesize large, complex software programs.
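To give a feel for what synthesis-from-a-specification means at toy scale
(a hedged sketch, nothing like real systems such as MOSES or Koza's GP):
enumerate short compositions of primitives and keep the first one that
matches a set of input/output examples --

from itertools import product

PRIMITIVES = {
    "inc":    lambda x: x + 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
    "neg":    lambda x: -x,
}

def synthesize(examples, max_depth=3):
    # Brute-force search over pipelines of primitives, shortest first.
    for depth in range(1, max_depth + 1):
        for names in product(PRIMITIVES, repeat=depth):
            def run(x, names=names):
                for n in names:
                    x = PRIMITIVES[n](x)
                return x
            if all(run(x) == y for x, y in examples):
                return names
    return None

# Specification given only as examples of desired behavior: f(x) = (2x)^2
print(synthesize([(1, 4), (2, 16), (3, 36)]))   # -> ('double', 'square')

Real program synthesis replaces the brute-force loop with evolutionary or
probabilistic search over a vastly richer primitive set; the hard part, as
the n log n sorting example suggests, is making that search scale.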

Automated program learning is part of the Novamente system but the architecture
is designed so that only small programs need to be learned, carrying
out particular
internal or external tasks/functions.  Still, this is the most
resource-intensive part of
the Novamente system (the part that's most likely to require supercomputers to
achieve human-level AI).

-- Ben

On Nov 26, 2007 7:14 AM, Mike Tintner [EMAIL PROTECTED] wrote:
 John: I kind of like the idea of building software that then builds AGI.

 What are the best current examples of (to any extent) self-building software
 ?


 -
 This list is sponsored by AGIRI: http://www.agiri.org/email
 To unsubscribe or change your options, please go to:
 http://v2.listbox.com/member/?;


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=68550543-6ccd8f


Re: Re[6]: [agi] Funding AGI research

2007-11-25 Thread Benjamin Goertzel
Linas:
 I find it telling that no one is saying "I've got the code, I just need to
 scale it up 1000-fold to make it impressive" ...

Yes, that's an accurate comment.  Novamente will hopefully reach that
point in a few years.

For now, we will need (and use) a lotta machines for commercial product
deployment purposes.

But for R&D purposes, it's all about solving a large number of moderate-sized
computer science and AI research problems, that are connected together via
the overall NM AGI design.  Once these problems are all worked through and
we have a completed Novamente codebase then we will be far better able
to evaluate what our hardware requirements actually are.  I am pretty sure
they will be large.  But right now, having masses of hardware wouldn't
accelerate
our progress all that much.  What is useful to us is money to pay the
right brains
to solve the long list of apparently-not-that-huge technical problems between
here and a completed Novamente system.  And of course there is always a nonzero
risk that one of these apparently-not-that-huge technical problems will turn out
to be huge; but, a lot of thinking has gone on over a number of years
in a serious
attempt to avoid this...

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=68394009-e1d34e


Re: Re[6]: [agi] Funding AGI research

2007-11-25 Thread Benjamin Goertzel
Cassimatis's system is an interesting research system ... it doesn't yet have
lotsa demonstrated practical functionality, if that's what you mean by
work...

He wants to take a bunch of disparately-functioning agents, and hook them
together into a common framework using a common logical interlingua...

I think this approach is unlikely to lead to the various agents involved
quelling, rather than exacerbating, each other's intrinsic combinatorial
explosions...

I think it is unlikely to lead to sufficiently coherent system-wide
emergent dynamics
to give rise to an effective phenomenal self ...

But given the primitive state of AGI theory at the moment, I can't
*prove* that these
complaints are correct, of course...

-- Ben G

On Nov 25, 2007 7:22 PM, Edward Porter [EMAIL PROTECTED] wrote:



 A few days ago there was some discussion on this list about the potential
 usefulness of narrow AI to AGI.



 Nick Cassimatis, who is speaking at AGI 2008, has something he calls
 Polyscheme which is described partially at the following AGIRI link:
 http://www.agiri.org/workshop/Cassimatis.ppt



 It appears to use what are arguably narrow AI modules in a coordinated
 manner to achieve AGI.



 Is this a correct interpretation?  Does it work?  And, if so, how?



 I can imagine how multiple narrow AI's could be used to create a more
 general AGI if there were some AGI glue to represent and learn the
 relationships between the different AGI modalities.  Cassimatis mentions
 tying these different modalities together using relations involving times,
 space, events, identity, causality and belief.  (But I don't remember much
 description of how it does it.)



 Arguably these are enough dimensions to create generalized representations,
 provided there is some generalized means for representing all the important
 states and representations in each of the Narrow AI modalities and the
 relationships between them in each of these dimensions and compositions and
 generalizations formed from such relationships.



 Is that what Cassimatis is talking about?



 Ed Porter 
  This list is sponsored by AGIRI: http://www.agiri.org/email

 To unsubscribe or change your options, please go to:
 http://v2.listbox.com/member/?;

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=68488723-9c2917


Re: Re[4]: [agi] Funding AGI research

2007-11-20 Thread Benjamin Goertzel


  Are you asking for success stories regarding research funding in any
 domain,
  or regarding research funding in AGI?

 Any domain, please.



OK, so your suggestion is that research funding, in itself, is worthless in
any domain?

I don't really have time to pursue this kind of silly argument, sorry...

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=67380099-81533c

Re: Re[6]: [agi] Funding AGI research

2007-11-20 Thread Benjamin Goertzel



 No.
 My point is that massive funding without having a prototype prior to
 funding is worthless most of the time.
 If a prototype cannot be created at reasonably low cost, then a fully working
 product most likely cannot be created even with massive funding.



Well, this seems to dissolve into a set of vagaries...

How much funding is massive varies from domain to domain.  E.g. it's hard
to do anything in nanotech without really expensive machinery.  For AGI, $10M
is a lot of money, because the main cost is staff salaries, plus commodity
hardware.  For nanotech, $10M isn't all that much, since specialized
hardware is needed to do many kinds of serious work.

And, what counts as a prototype often depends on one's theoretical
framework.  Do you
consider there to have been a prototype for the first atom bomb?  I don't
think there was,
but there were preliminary experiments that, given the context of the
framework of theoretical
physics, made the workability of the atom bomb seem plausible.

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=67466565-8e64f2

Re: Re[6]: [agi] Funding AGI research

2007-11-20 Thread Benjamin Goertzel
On Nov 20, 2007 11:22 PM, Dennis Gorelik [EMAIL PROTECTED] wrote:

 Jiri,

  AGI is IMO possible now but requires very different approach than narrow
 AI.

 AGI requires properly tuning some existing narrow AI technologies,
 combining them together, and maybe adding a couple more.

 That's a massive amount of work, but most AGI research and development
 can be shared with narrow AI research and development.



Unfortunately, I don't think this is quite true...

There is plenty of overlap between AGI and narrow AI but not as much as you
suggest...

Also:
Narrow AI technologies are not meant to be combined together, so to build
AGI out of narrow-AI components, you need to create narrow-AI components
that are **specifically designed** for integration into an AGI system...

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=67471374-384e50

Re: Re[8]: [agi] Funding AGI research

2007-11-20 Thread Benjamin Goertzel



 Could you describe a piece of technology that simultaneously:
 - Is required for AGI.
 - Cannot be required part of any useful narrow AI.


The key to your statement is the word "required".

Nearly any AGI component can be used within a narrow AI, but, the problem is,
it's usually a bunch easier to make narrow AI's using components that don't
have any AGI value...





 Another way to go -- use existing narrow AIs as prototypes when
 building AGI.


I don't really accept any narrow-AI as a prototype for an AGI.

This is an example of what I meant when I said that what counts as a
prototype is theory-dependent, I suppose...

I think there is loads of evidence that narrow-AI prowess does not imply
AGI prowess, so that a narrow-AI can't be considered a prototype for an
AGI..

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=67477308-6a7310

Re: Re[2]: [agi] Funding AGI research

2007-11-18 Thread Benjamin Goertzel
On Nov 18, 2007 12:50 AM, Dennis Gorelik [EMAIL PROTECTED] wrote:

 Benjamin,

 Do you have any success stories of such research funding in the last
 20 years?
 Something that resulted in useful accomplishments.


Are you asking for success stories regarding research funding in any domain,
or regarding research funding in AGI?

Obviously, there are not yet any real success stories regarding research
funding in AGI.  But this is because

-- Due to advances in computing hardware and cognitive science, AGI is
just now (meaning, in the last 3-8 years, say), for the first time, at a
stage where serious advances can be made

-- The small amount of $$ put into AGI research has almost entirely focused
on an even smaller set of conceptual approaches (GOFAI),
which are deeply problematic for reasons already extensively discussed on
this list


There were no success stories regarding manned spaceflight before Apollo ...
there were no success stories for genome-sequencing before it was first done, etc.

You seem to have gone from

-- advocating funding only for development rather than research

to

-- advocating funding only for research in mature fields where there have
already been dramatic successes

I don't think either of these is an adequate funding strategy.

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=66341486-50b99a

Re: [agi] Funding AGI research

2007-11-18 Thread Benjamin Goertzel
Novamente as a whole is definitely a research project, albeit one with
a very well fleshed out research plan.  I have a strong hypothesis about
how the project will come out, and arguments in favor of this hypothesis;
but I don't have the level of confidence I'd have in, say, the stability
of a bridge built according to known principles of mechanical engineering;
or the functionality of a word-processing program built according to
good specs.

On the other hand our virtual animals product development is mostly
(difficult) software development, with a few bits of (strictly delimited)
research contained therein...  The outcome there is way more
determinate.

ben

On Nov 18, 2007 1:15 PM, Joshua Fox [EMAIL PROTECTED] wrote:

  What you are advocating is to fund Development but not Research.
 Ben,

 I favor funding for both R and D.

 Would you put the Novamente project in the R or the D phase? If a
 prototype is a good way to distinguish the two, is there a prototype
 for Novamente? And if it is still in the research phase, is it
 reasonable to give time estimates for a project which is still in that
 phase?

 Joshua

 -
 This list is sponsored by AGIRI: http://www.agiri.org/email
 To unsubscribe or change your options, please go to:
 http://v2.listbox.com/member/?;


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=66343974-5ff050

Re: [agi] Funding AGI research

2007-11-18 Thread Benjamin Goertzel
  Proactively minimizing risk in as
 many areas as possible makes a venture much more salable, but most AI
 ventures tend to be very apparently risky at many levels that have no
 relation to the AI research per se and the inability of these
 ventures to minimize all that unnecessary risk is a giant mark
 against them.


 Cheers,

 J. Andrew Rogers



I've often heard it said that VC's consider three kinds of risk

-- people
-- market
-- technology

and if you have risk in more than one of those areas, you
won't get funded.

Of course like all such saws this is a big simplification but it has
some relevance to this context anyway.

AGI is always going to be viewed as a major technology risk,
unless one comes into the fundraising process with an extremely
strong prototype (and maybe even then).

Mitigating the people-risk requires getting experienced businesspeople
on board, which is generically difficult for an AGI company because of the
bad reputation AI has.

Mitigating the market risk means finding a market niche where incremental
work toward AGI is of dramatically more economic value than narrow-AI
technology.  I think this is really the hard part.  (Or, as an alternative,
it involves finding gullible investors and making them believe that the
incremental work toward AGI will be of dramatically more economic value than
narrow-AI technology -- but most good AGI researchers don't have a taste for
this particular flavor of dishonesty...)

In most practical app areas, you can make something that can be spun to
customers as almost-as-good-as-a-fractional-AGI, via using clever narrow-AI
techniques.  The obvious example is Google, which isn't nearly as good as a
good NLP search engine will be, but is almost-as-useful as a partially-mature
NLP search engine, and was a lot easier/cheaper to prototype and initially
roll out.

As I've said before, I am bullish on virtual worlds and gaming as an area
where early-stage AGI tech can have dramatically more economic value than
cleverly crafted narrow-AI.  Humanoid robotics is clearly another such area,
but a trickier area to get started in right now.  But I'm not saying these
are the only examples.

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=66346729-733843

Re: [agi] Funding AGI research

2007-11-18 Thread Benjamin Goertzel
  I have not heard a *creative* new idea
 here that directly addresses and shows the power to solve even in part the
 problem of creating general intelligence.



To be quite frank, the most creative and original ideas inside the Novamente
design are quite technical.  I suspect you don't have the background to
understand them.

However, I can see there's something else underlying your remarks: a sort of
mystification of the power of intelligence.  I think you'll be shocked when
the workings of the brain are finally unveiled and what is revealed is that
there are a lot of cleverly evolved modules carrying out various particular
functions, wired together in an architecture designed to enable their
synergy ... and some of them wired to observe and improve one another in
virtuous cycles, using their particular recognition/learning algorithms ...
but no magic trick, no super-secret algorithm of thought ...

In short, I think you'll look at the brain and say hhmmpph, that's not very
creative  ;-p

I think you are looking for an essence of intelligence that is just not
there.  The essence is in the emergent phenomena that come about when a
sufficiently rich set of components is wired together in a way allowing them
to inter-adapt and inter-improve in the context of controlling an agent that
needs to achieve complex goals in a complex environment.  There is no magic
trick of thought at the center of it all, that is just waiting for some
Einstein of Cognition to unveil it.

But you can keep waiting for this magic trick to be revealed, if you like,
while some of the rest of us work on actually creating AGI on rational
bases ... and more or less gracefully putting up with your continual
complaints that because we're not done yet, our approaches must obviously be
worthless ;-)

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=66437203-2462b5

Re: [agi] Multi-agent learning

2007-11-18 Thread Benjamin Goertzel
On Nov 18, 2007 6:45 PM, Lukasz Stafiniak [EMAIL PROTECTED] wrote:

 Ben,

 Have you already considered what form of multi-agent epistemic logic
 (or whatever extension to PLN) Novamente will use to merge knowledge
 from different avatars?


Well, standard PLN handles this in principle via ContextLinks which allow
contextual (e.g. perspectival) specialization of knowledge.

However, **control** of this kind of process is the tricky thing, really..
melding knowledge from different minds is computationally costly and one
has to decide when one wants to do it...




 Related, do you consider some form of privacy policy, or do you put
 the responsibility for not leaking secrets on avatars' owners?


For our initial virtual animals:
There is a collective memory among all the AI animals, and also an
individual
memory for each animal, and there's a policy for deciding what goes in
which...

In later products we will likely allow more flexibility, and let users
control
how much they want their agents to share and/or take from the collective
agent unconscious.
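As a toy sketch of that two-tier setup (hypothetical policy and names, not
the product's actual implementation):

class TieredMemory:
    collective = {}                        # shared across all AI animals

    def __init__(self, agent_id, share_policy):
        self.agent_id = agent_id
        self.individual = {}
        self.share_policy = share_policy   # item -> True if it should be shared

    def store(self, key, value):
        if self.share_policy(value):
            TieredMemory.collective[key] = value
        else:
            self.individual[key] = value

    def recall(self, key):
        # an animal's own memory shadows the collective one
        return self.individual.get(key, TieredMemory.collective.get(key))

# Assumed policy: share general skills, keep owner-specific facts individual.
dog = TieredMemory("dog_1", share_policy=lambda v: v.get("kind") == "skill")
dog.store("fetch", {"kind": "skill", "how": "run, grab, return"})
dog.store("owner_name", {"kind": "fact", "value": "Alice"})
print(dog.recall("owner_name")["value"])   # 'Alice', from individual memory

Letting users adjust share_policy is essentially the "how much to share with
the collective agent unconscious" dial mentioned above.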

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=66449659-1f8a96

Re: [agi] Funding AGI research

2007-11-18 Thread Benjamin Goertzel
Hi,



 The majority of VC's do, as you say, want a technology that is sewn up,
 from the point of view of technical feasibility.  But this is not always
 true.  There is always a gray area at the fringe of feasibility where
 the last set of questions has not been *fully* answered before money is
 thrown at it.  I believe this happened in a number of projects during
 the dot-com insanity.


A lot of things were possible in that period that aren't possible during
normal business conditions...



 If I am right in this last idea, VCs have a stark choice:  if they want
 AGI, they have to relax their insistence on a project that does not have
 that last research step.  If they insist on something stronger, they
 can kiss goodbye to ever getting an AGI.



Well, VC's don't give a crap about AGI, at least not in their capacity
as VC's.  They just want to make $$ in a certain way, according to a certain
risk profile...

So it is only via an unusual combination of factors that a VC is going to
invest in an AGI project.

And of course, if this unusual combination occurs ONCE, and yields
successful
results ... then every VC will want to jump on board and invest in AGI as
quickly as possible and at a generous valuation ;-)

-- Ben G


Re: [agi] Funding AGI research

2007-11-18 Thread Benjamin Goertzel
There are a lot of worthwhile points in your post, and a number of things I
don't fully agree with, but I don't have time to argue them all right now...

Instead I'll just pick two points:

1)
The Babbages and Leibnizes of a given historical period are often visible
only in HINDSIGHT.  You can't say that there are no Babbages or Leibnizes of
AGI around right now ... there could be some on this very list, unrecognized
by you, but who will be recognized by all a few decades from now...

2)
I don't think it's true that Babbage's or Leibniz's machines were specced
out so much better than, say, Novamente.  Relative to the technology of
their time, plenty of details were left unspecified -- it just seems obvious
to us now, in hindsight, how to fill in those details.  It wasn't obvious to
all their contemporaries.  And while, in hindsight, the workability of their
machines seems obvious to us, to their contemporaries it must have seemed
like the workability of their machines required a huge leap of intuition.
They had no rigorous mathematical proof of the workability of their
machines, nor did they have working prototypes.  They had conceptual
arguments that pushed the boundaries of the science of their times, and
seemed like nonsense to many of their contemporaries.

3)
I don't agree that AGI is primarily a computer science problem, any more
than, say, building a car is primarily a metalworking problem.  AGI requires
computer science problems to be solved as part of its solution; but IMO the
essence of AGI-creation is not computer science.  This seems to be a genuine
difference of scientific intuition btw the two of us.  Plenty of others whom
I respect appear to share the same opinion as you.


-- Ben G



On Nov 18, 2007 11:04 PM, J. Andrew Rogers [EMAIL PROTECTED]
wrote:


 On Nov 18, 2007, at 7:06 PM, Benjamin Goertzel wrote:
 
  Navigating complex social and business situations requires a quite
  different set of capabilities than creating AGI.  Potentially they
  could
  be combined in the same person, but one certainly can't assume that
  would be the case.


 I completely agree.  But if we are to assume that AGI requires some
 respectable amount of funding, as seems to be posited by many people,
 then it seems that it will require a person with broader skills than
 the stereotypical computer science nerd.  In that case, maybe AGI is
 not accessible to someone who is unwilling or unable to be anything
 but a computer science nerd.  As if the pool of viable AGI
 researchers was not small enough already.


  And, I don't think it's fair to say that if you're smart enough to
  solve AGI,
  you should be able to quickly make a pile of money doing some kind of
  more marketable technical-computer-science, and fund the AGI
  yourself.
 
  This assumes a lot of things, for instance that AGI is the same
  sort of
  problem as technical-computer-science problems, so that if someone can
  do AGI better than others, they must be able to do technical-
  computer-science
  better than others too.  But I actually don't think this is true; I
  think that AGI
  demands a different sort of thinking.


 I'm not so sure about this.  All hard problems seem to receive
 similar sentiments until they are actually solved.  I do think that
 AGI is a relatively hard problem even among the hard problems, but
 there are other computer science problems that had thousands of pages
 of literature devoted to them without much progress that when they
 were solved by someone turned out to be relatively simple.  That
 20/20 hindsight thing.  To the extent that there is any special sauce
 in AGI, I expect it will look like one of these cases.

 Solving computer science problems is a pretty general skill, in part
 because it is a pretty shallow field in most important respects.  To
 use AI research as an example, it is composed of only a handful of
 fundamental ideas from which a myriad of derivatives and mashups have
 been created.  Most other problems in computer science have the same
 feature, and when problems get solved it is because someone looked at
 the handful of fundamentals and ignored the vast bodies of derivative
 products which add nothing new.  Vast quantities of research does not
 equate to a significant quantity of ideas.  AI is a little more
 complex than some other topics, but is still far simpler at the level
 of fundamentals than some people make it out to be.


 People are incapable of solving AGI for the same reason they are
 incapable of solving any of the other interesting computer science
 problems, which was the point I was making obliquely.  It is not a
 different skill, it is the same skill that the vast majority of all
 computer science people are incompetent at.  And AGI is a particularly
 hard problem, even for that tiny minority of people capable of
 solving real problems in computer science.

 If you cannot solve interesting computer science problems that are
 likely to be simpler, then it is improbable

Re: [agi] Funding AGI research

2007-11-18 Thread Benjamin Goertzel
On Nov 18, 2007 11:24 PM, Benjamin Goertzel [EMAIL PROTECTED] wrote:


 There are a lot of worthwhile points in your post, and a number of things
 I don't fully agree with, but I don't have time to argue them all right
 now...

 Instead I'll just pick two points:


er, looks like that was three ;-)




 1)
 The Babbages and Leibnizes of a given historical period are often visible
 only in HINDSIGHT.  You can't say that there are no Babbages or Leibnizes of
 AGI around right now ... there could be some on this very list, unrecognized
 by you, but who will be recognized by all a few decades from now...

 2)
 I don't think it's true that Babbage's or Leibniz's machines were specced
 out so much better than, say, Novamente.  Relative to the technology of
 their time, plenty of details were left unspecified -- it just seems obvious
 to us now, in hindsight, how to fill in those details.  It wasn't obvious to
 all their contemporaries.  And while, in hindsight, the workability of their
 machines seems obvious to us, to their contemporaries it must have seemed
 like the workability of their machines required a huge leap of intuition.
 They had no rigorous mathematical proof of the workability of their
 machines, nor did they have working prototypes.  They had conceptual
 arguments that pushed the boundaries of the science of their times, and
 seemed like nonsense to many of their contemporaries.

 3)
 I don't agree that AGI is primarily a computer science problem, any more
 than, say, building a car is primarily a metalworking problem.  AGI requires
 computer science problems to be solved as part of its solution; but IMO the
 essence of AGI-creation is not computer science.  This seems to be a genuine
 difference of scientific intuition btw the two of us.  Plenty of others whom
 I respect appear to share the same opinion as you.


 -- Ben G




 On Nov 18, 2007 11:04 PM, J. Andrew Rogers [EMAIL PROTECTED]
 wrote:

 
  On Nov 18, 2007, at 7:06 PM, Benjamin Goertzel wrote:
  
   Navigating complex social and business situations requires a quite
   different set of capabilities than creating AGI.  Potentially they
   could
   be combined in the same person, but one certainly can't assume that
   would be the case.
 
 
  I completely agree.  But if we are to assume that AGI requires some
  respectable amount of funding, as seems to be posited by many people,
  then it seems that it will require a person with broader skills than
  the stereotypical computer science nerd.  In that case, maybe AGI is
  not accessible to someone who is unwilling or unable to be anything
  but a computer science nerd.  As if the pool of viable AGI
  researchers was not small enough already.
 
 
   And, I don't think it's fair to say that if you're smart enough to
   solve AGI,
   you should be able to quickly make a pile of money doing some kind of
   more marketable technical-computer-science, and fund the AGI
   yourself.
  
   This assumes a lot of things, for instance that AGI is the same
   sort of
   problem as technical-computer-science problems, so that if someone can
 
   do AGI better than others, they must be able to do technical-
   computer-science
   better than others too.  But I actually don't think this is true; I
   think that AGI
   demands a different sort of thinking.
 
 
  I'm not so sure about this.  All hard problems seem to receive
  similar sentiments until they are actually solved.  I do think that
  AGI is a relatively hard problem even among the hard problems, but
  there are other computer science problems that had thousands of pages
  of literature devoted to them without much progress that when they
  were solved by someone turned out to be relatively simple.  That
  20/20 hindsight thing.  To the extent that there is any special sauce
  in AGI, I expect it will look like one of these cases.
 
  Solving computer science problems is a pretty general skill, in part
  because it is a pretty shallow field in most important respects.  To
  use AI research as an example, it is composed of only a handful of
  fundamental ideas from which a myriad of derivatives and mashups have
  been created.  Most other problems in computer science have the same
  feature, and when problems get solved it is because someone looked at
  the handful of fundamentals and ignored the vast bodies of derivative
  products which add nothing new.  Vast quantities of research does not
  equate to a significant quantity of ideas.  AI is a little more
  complex than some other topics, but is still far simpler at the level
  of fundamentals than some people make it out to be.
 
 
  People are incapable of solving AGI for the same reason they are
  incapable of solving any of the other interesting computer science
  problems, which was the point I was making obliquely.  It is not a
  different skill, it is the same skill that the vast majority of all
  computer science people are incompetent at.  And AGI is particularly
  hard problem, even

Re: [agi] Funding AGI research

2007-11-17 Thread Benjamin Goertzel
On Nov 17, 2007 1:08 PM, Dennis Gorelik [EMAIL PROTECTED] wrote:

 Jiri,

 Give $1 for the research to who?
 Research team can easily eat millions $$$ without producing any useful
 results.
 If you just randomly pick researchers for investment, your chances to
 get any useful outcome from the project is close to zero.

 The best investing practise is to invest only into such teams that
 produced working prototype already.



Dennis,

What you are advocating is to fund Development but not Research.

I think the history of science in the last century shows that funding
Research as well as Development can be extremely valuable.

Of course, each individual Research team has only a small chance
of success if viewed from the big-picture perspective; but the $300M/year
funding that was posited could fund a lot of AGI Research teams.
Let's say it was used to fund 300 research teams at a rate of $1M/year
per team.  The odds are high that more than one of these teams would produce
breakthrough discoveries within a 5-10 year period, even though betting
on any individual team would be a long-shot.

The US Research funding establishment is doing a good job of funding
many sorts of research, but not AGI or life-extension.

-- Ben


Re: [agi] Funding AGI research

2007-11-17 Thread Benjamin Goertzel
Richard,

Though we have theoretical disagreements, I largely agree with your
analysis of the value of prototypes for AGI.

Experience has shown repeatedly that prototypes displaying apparently
intelligent behavior in various domains are very frequently dead-ends,
because they embody various sorts of cheating.

And, if AGI really is a complex emergent phenomenon that requires
a certain sort of large complex system in order to come about, then one
would
not expect that any kind of cheap, small-scale prototype would be able
to demonstrate it.

As large-scale funding requires impressive prototypes, one is then faced
with an irritating task of creating prototypes that fulfill the largely
unrelated
goals of

-- looking impressive to investors who want to see prototypes
-- actually being meaningful steps on the path to AGI

It is actually surprisingly difficult to find ways to fulfill these two
goals
at the same time.  I'm hoping we'll be there, with Novamente, sometime
in late 2008 or early 2009.  I don't think our initial virtual animals will
be impressive
enough as AGI to qualify -- though they'll be really cool teachable
animals!! --
but, I think virtual agents with language learning facility will pass the
threshold...

But even once we get there (assuming we do), my own faith in the system
shown-off
as a path to AGI will be largely uncorrelated with its impressiveness as a
prototype...

ben




On Nov 17, 2007 1:41 PM, Richard Loosemore [EMAIL PROTECTED] wrote:

 Dennis Gorelik wrote:
  Jiri,
 
  Give $1 for the research to who?
  Research team can easily eat millions $$$ without producing any useful
  results.
  If you just randomly pick researchers for investment, your chances to
  get any useful outcome from the project is close to zero.
 
  The best investing practise is to invest only into such teams that
  produced working prototype already.
  Serious funding is usually helpful only to scale prototype up.
  (See how it worked out for Google, for example).
 
  So far there is no working prototype of AGI yet, therefore there is no
  point to invest.
 
  On the other hand some narrow AI teams already produced some useful
  results. Such teams deserve investments.
  When narrow AI field is mature enough -- making next step to AGI would
  be possible for self-funding AGI research team.

 Although this seems like a reasonable stance, I don't think it is a
 strategy that will lead the world to the fast development (or perhaps
 any development) of a real AGI.  Allow me to explain why I think this.

 I agree you would not just pick researchers at random, but on the other
 hand if you insist on a team with a working prototype this might well
 be a disaster.

 I am in a position to use massive investment straight away (and I have a
 project plan that says how), but the specific technical analysis of the
 AGI problem that I have made indicates that nothing like a 'prototype'
 is even possible until after a massive amount of up-front effort.  There
 are things we can do ahead of time (and some of those are underway), but
 if anyone asks for a prototype that does some fraction of the task, I
 can only point to the technical analysis and ask the investor to
 understand why this is not possible.

 Catch 22.  No prototype, no investment;  no investment, no prototype.

 Investors are leery of sorry, no prototype! claims (with good reason,
 generally) but they are also not tech-savvy enough to comprehend the
 technical analysis that tells them that they should make an exception in
 this case.  And even worse, the technical community (for reasons I have
 explained, to general annoyance ;-) ) has reasons for disliking the
 particular technical analysis I have offered.

 If I turn out to be right in my analysis, none of the people who have
 what they claim to be prototypes will actually reach the goal of a
 viable AGI.  (They disagree, of course!).



 Richard Loosemore







 
  Wednesday, October 31, 2007, 11:50:12 PM, you wrote:
 
  I believe AGI does need promoting. And it's IMO similar with the
  immortality research some of the Novamente folks are involved in. It's
  just unbelievable how much money (and other resources) are being used
  for all kinds of nonsense/insignificant projects worldwide. I wish
  every American gave just $1 for AGI and $1 for immortality research.
  Imagine what this money could for all of us (if used wisely).
  Unfortunately, people will rather spend the money for their popcorn in
  a cinema.
 
 
  Godlike intelligence? :) Ok, here is how I see it: If we survive, I
  believe we will eventually get plugged into some sort of pleasure
  machine and we will not care about intelligence at all. Intelligence
  is a useless tool when there are no problems and no goals to think
  about. We don't really want any goals/problems in our minds.
  Basically, the goal is to not have goal(s) and safely experience as
  intense pleasure as the available design allows for as long as
  possible. AGI could be 

Re: [agi] Polyworld: Using Evolution to Design Artificial Intelligence

2007-11-15 Thread Benjamin Goertzel
About PolyWorld and Alife in general...

I remember playing with PolyWorld 10 years ago or so.  And, I had a grad
student at Uni. of Western Australia build a similar system, back in my
Perth days... (it was called SEE, for Simple Evolving Ecology.  We never
published anything on it, as I left Australia in the middle of the
research...)

But after fiddling with stuff like this a while, it becomes clear that, just
as each GOFAI or machine learning program can be pushed so far and no
further; similarly each Alife program can be pushed so far and no further...

One of the most fascinating busts in that area was Tom Ray's attempt to
induce robust virtual evolution of multicellular life.  I forget the name of
his project but he was doing it at ATR in Japan.  It was a follow-up to his
excellently successful Tierra program, which was the first to demonstrate
biology-like reproduction in artificial organisms  Anyway Tom's attempt
and many others to to beyond the complexity threshold observed in Alife
programs did not pan out...

Overall, I came away from my flirtation with Alife with the impression that
it was doomed due to the lack of a viable artificial chemistry (chemistry
arguably being the source of the richness of real biology).

So, there was some cool work on artificial chemistry of a sort, done by
Walter Fontana and many others, which I don't remember very well...

The deep question I came away with was: What exactly are the **abstract
properties** of the periodic table of elements that allows it to give rise
to chemical compounds and ensuing biological structures with so much
complexity?

And then I decided Alife was not gonna be a shortcut and turned wholly to AI
instead ;-)

Thing is, I'm sure Alife can work, but the computational requirements have
gotta be way way bigger than for AI.  And conceptually, it doesn't seem like
Alife is really a shortcut -- because puzzling out the requirements that
artificial chemistry needs to have, in order to support robust artificial
biology, seems just as hard or harder than building a simulated brain or a
non-brain-based AGI.  After all it's not like we know how real chemistry
gives rise to real biology yet --- the dynamics underlying protein-folding
remain ill-understood, etc. etc.

So I find this a deep and fascinating area of research (the borderline btw
artificial chemistry and artificial biology, more so than Alife proper), but
I doubt it's a shortcut to AGI ... though it would be cool to be proven
wrong ;-)

-- Ben G



On Nov 15, 2007 3:30 AM, Bob Mottram [EMAIL PROTECTED] wrote:

 Although I thought this was a good talk and I liked the fellow
 presenting it, to me it seems fairly clear that little or no progress
 has been made in this area over the last decade or so.  In the early
 1990s I wrote somewhat similar simulations where agents had their own
 neural networks whose architecture was specified by a genetic
 algorithm, but just like the speaker I came up against similar
 problems.

 As the guy says it should be in principle possible to go all the way
 from simple types of creatures up to more complex ones, like humans.
 In practice though what tends to happen is that the complexity of the
 neural nets reaches a plateau from which little subsequent progress
 occurs.  Even after allowing the system to run for tens of thousands
 of generations not much of interest happens.

 I think the main problem here is the low complexity of the environment
 and the agents themselves.  In a real biological system there are all
 kinds of niches which can be exploited in a variety of ways, but in
 polyworld (and other similar simulations) it's all very homogeneous.
 Real biological creatures are coalitions of millions of cells, each of
 which is a chemical factory containing an abundance of nano machinery,
 each of which is a possible site for evolutionary change.  The sensory
 systems of real creatures are also far richer than simply being able
 to detect three colours (even molluscs can do better than this), and
 this is obviously a limiting factor upon the development of greater
 intelligence.



 On 15/11/2007, Jef Allbright [EMAIL PROTECTED] wrote:
  This may be of interest to the group.
 
  http://video.google.com/videoplay?docid=-112735133685472483
 
 
  This presentation is about a potential shortcut to artificial
  intelligence by trading mind-design for world-design using artificial
  evolution. Evolutionary algorithms are a pump for turning CPU cycles
  into brain designs. With exponentially increasing CPU cycles while our
  understanding of intelligence is almost a flat-line, the evolutionary
  route to AI is a centerpiece of most Kurzweilian singularity
  scenarios. This talk introduces the Polyworld artificial life
  simulator as well as results from our ongoing attempt to evolve
  artificial intelligence and further the Singularity.
 
  Polyworld is the brain child of Apple Computer Distinguished Scientist
  Larry Yaeger, who remains the primary 

Re: [agi] Polyworld: Using Evolution to Design Artificial Intelligence

2007-11-15 Thread Benjamin Goertzel
I think that linguistic interaction with human beings is going to be what
lifts Second Life proto-AGI's beyond the glass ceiling...

Our first SL agents won't have language generation or language learning
capability, but I think that introducing it is really essential, esp. given
the limitations of SL as a purely physical environment...

ben

On Nov 15, 2007 1:38 PM, Bob Mottram [EMAIL PROTECTED] wrote:

 Which raises the question of whether the same complexity glass ceiling
 will be encountered when running AGI controlled agents within Second
 Life.  SL is probably more complex than polyworld, although that could
 be debatable depending upon your definition of complexity.  One factor
 which would raise the bar would be the additional baggage being
 introduced into the virtual world from the first life of human
 participants.


 On 15/11/2007, Bryan Bishop [EMAIL PROTECTED] wrote:
  On Thursday 15 November 2007 02:30, Bob Mottram wrote:
   I think the main problem here is the low complexity of the
   environment
 
  Complex programs can only be written in an environment capable of
  bearing that complexity:
 
  http://sl4.org/archive/0710/16880.html
 
  - Bryan
 
  -
  This list is sponsored by AGIRI: http://www.agiri.org/email
  To unsubscribe or change your options, please go to:
  http://v2.listbox.com/member/?;
 

 -
 This list is sponsored by AGIRI: http://www.agiri.org/email
 To unsubscribe or change your options, please go to:
 http://v2.listbox.com/member/?;



Re: [agi] Polyworld: Using Evolution to Design Artificial Intelligence

2007-11-15 Thread Benjamin Goertzel
On Nov 15, 2007 8:57 PM, Bryan Bishop [EMAIL PROTECTED] wrote:

 On Thursday 15 November 2007 08:16, Benjamin Goertzel wrote:
  non-brain-based AGI. After all it's not like we know how real
  chemistry gives rise to real biology yet --- the dynamics underlying
  protein-folding remain ill-understood, etc. etc.

 Can anybody elaborate on the actual problems remaining (beyond etc.
 etc.-- which is appropriate from Ben who is most notably not a
 biochemist/chemist/bioinformatician)?


Hey -- That is a funny comment -- I've published a dozen bioinformatics
papers
in the last 5 years, and am CEO / Chief Scientist of a bioinformatics
company (Biomind LLC, www.biomind.com) 

I am no chemist but I'm pretty much an expert on analyzing microarray
and SNP data, and various other corners of bioinformatics, having introduced
some
funky new techniques into the field.  In fact my most popular research paper
is not on AGI but rather on Chronic Fatigue Syndrome -- it was the
first-ever
paper giving evidence for a (weak) genetic basis for CFS.

-- Ben G


Re: [agi] Polyworld: Using Evolution to Design Artificial Intelligence

2007-11-15 Thread Benjamin Goertzel
No worries!! just wanted to clarify...

To address your question more usefully: There is soo much evidence
that chemistry is subtly important for biology in ways that are poorly
understood.

In neuroscience for instance the chemistry of synaptic transmission btw
neurons is still weakly understood, so we still don't know exactly how poor
a model the formal neuron used in computer science is  As a single
example you have both ionotropic and metabotropic glutamate receptors
along neurons ... whose synaptic transmission properties depend on
ambient chemistry in the intracellular medium in ways no one understands
really.. etc. etc. etc. ;-)

ben

On Nov 15, 2007 10:07 PM, Bryan Bishop [EMAIL PROTECTED] wrote:

 On Thursday 15 November 2007 20:02, Benjamin Goertzel wrote:
  On Nov 15, 2007 8:57 PM, Bryan Bishop [EMAIL PROTECTED] wrote:
   Can anybody elaborate on the actual problems remaining (beyond
   etc. etc.-- which is appropriate from Ben who is most notably not
   a biochemist/chemist/bioinformatician)?
 
  Hey -- That is a funny comment

 Oh my. This is a big, big mistake on my part. I am sorry. Please accept
 my apologies .. and the knowledge that my parenthetical comment no
 longer applies.

 - Bryan

 -
 This list is sponsored by AGIRI: http://www.agiri.org/email
 To unsubscribe or change your options, please go to:
 http://v2.listbox.com/member/?;



Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Benjamin Goertzel
Hi,




 No:  the real concept of lack of grounding is nothing so simple as the
 way you are using the word grounding.

 Lack of grounding makes an AGI fall flat on its face and not work.

 I can't summarize the grounding literature in one post.  (Though, heck,
 I have actually tried to do that in the past:  didn't do any good).



FYI, I have read the symbol-grounding literature (or a lot of it), and
generally
found it disappointingly lacking in useful content... though I do agree with
the basic point that non-linguistic grounding is extremely helpful for
effective
manipulation of linguistic entities...

-- Ben G


Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Benjamin Goertzel
Richard,



 So here I am, looking at this situation, and I see:

    AGI system interpretation (implicit in system use of it)
    Human programmer interpretation

 and I ask myself which one of these is the real interpretation?

 It matters, because they do not necessarily match up.


That is true, but in some cases they may approximate each other well..

In others, not...

This happens to be a pretty simple case, so the odds of a good
approximation seem high.



  The human
 programmer's interpretation has a massive impact on the system because
 all the inference and other mechanisms are built around the assumption
 that the probabilities mean a certain set of things.  You manipulate
 those p values, and your manipulations are based on assumptions about
 what they mean.



Well, the PLN inference engine's treatment of

ContextLink
    home
    InheritanceLink Bob_Yifu friend

is in no way tied to whether the system's implicit interpretation of the
ideas of "home" or "friend" is humanly natural, or humanly comprehensible.

The same inference rules will be applied to cases like

ContextLink
    Node_66655
    InheritanceLink Bob_Yifu Node_544

where the concepts involved have no humanly-comprehensible label.

It is true that the interpretations of ContextLink and InheritanceLink are
fixed
by the wiring of the system, in a general way (but what kinds of properties
are referred to by them may vary in a way dynamically determined by the
system).
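
To illustrate the point that the inference machinery sees only link structure
and truth values, here is a toy Python sketch.  The dictionary knowledge base
and the crude strength-combination rule are invented for the example; the
real PLN deduction rule is considerably more careful.

# (context, A, B) -> strength of: ContextLink context (InheritanceLink A B)
kb = {
    ("home",       "Bob_Yifu", "friend"):   0.9,
    ("home",       "friend",   "trusted"):  0.8,
    ("Node_66655", "Bob_Yifu", "Node_544"): 0.9,
    ("Node_66655", "Node_544", "Node_212"): 0.8,
}

def toy_contextual_deduction(context, a, b, c):
    """Infer InheritanceLink a c within `context` from a->b and b->c.
    Node names are opaque IDs; the rule behaves identically whether they
    are humanly meaningful or not."""
    s_ab = kb.get((context, a, b))
    s_bc = kb.get((context, b, c))
    if s_ab is None or s_bc is None:
        return None
    return s_ab * s_bc   # deliberately crude stand-in for PLN deduction

print(toy_contextual_deduction("home", "Bob_Yifu", "friend", "trusted"))            # ~0.72
print(toy_contextual_deduction("Node_66655", "Bob_Yifu", "Node_544", "Node_212"))   # ~0.72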


 In order to completely ground the system, you need to let the system
 build its own symbols, yes, but that is only half the story:  if you
 still have a large component of the system that follows a
 programmer-imposed interpretation of things like probability values
 attached to facts, you have TWO sets of symbol-using mechanisms going
 on, and the system is not properly grounded (it is using both grounded
 and ungrounded symbols within one mechanism).



I don't think the system needs to learn its own probabilistic reasoning
rules
in order to be an AGI.  This, to me, is too much like requiring that a brain
needs
to learn its own methods for modulating the conductances of the bundles of
synapses linking between the neurons in cell assembly A and cell assembly B.

I don't see a problem with the AGI system having hard-wired probabilistic
inference rules, and hard-wired interpretations of probabilistic link
types.  But
the interpretation of any **particular** probabilistic relationship inside
the system, is relative
to the concepts and the empirical and conceptual relationships that the
system
has learned.
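
As one example of what a hard-wired uncertain-inference rule can look like,
here is the simplified independence-based deduction strength formula commonly
cited for PLN, as a Python sketch.  The function name and the example numbers
are mine; the production rule also tracks confidence/count information that
is omitted here.

def pln_deduction_strength(s_ab, s_bc, s_b, s_c):
    """Estimate the strength of Inheritance A C from Inheritance A B,
    Inheritance B C, and the node strengths of B and C, assuming
    independence.  Simplified: confidence handling is left out."""
    if s_b >= 1.0:
        return s_bc   # degenerate case: B already covers everything
    return s_ab * s_bc + (1.0 - s_ab) * (s_c - s_b * s_bc) / (1.0 - s_b)

# e.g. A = raven, B = bird, C = flies, with made-up strengths:
print(pln_deduction_strength(s_ab=0.95, s_bc=0.9, s_b=0.1, s_c=0.6))   # ~0.88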

You may think that the brain learns its own uncertain inference rules based
on a
lower-level infrastructure that operates in terms entirely unconnected from
ideas
like uncertainty and inference.  I think this is wrong.  I think the brain's
uncertain
inference rules are the result, on the cell assembly level, of Hebbian
learning and
related effects on the neuron/synapse level.  So I think the brain's basic
uncertain
inference rules are wired-in, just as Novamente's are, though of course
using
a radically different infrastructure.

Ultimately an AGI system needs to learn its own reasoning rules and
radically
modify and improve itself, if it's going to become strongly superhuman!  But
that is
not where we need to start...

-- Ben


Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Benjamin Goertzel
On Nov 14, 2007 1:36 PM, Mike Tintner [EMAIL PROTECTED] wrote:

 RL:In order to completely ground the system, you need to let the system
 build its own symbols



Correct.  Novamente is designed to be able to build its own symbols.

What is built in are mechanisms for building symbols, and for
probabilistically interrelating symbols once created...

ben g



 V. much agree with your whole argument. But -  I may well have missed
 some
 vital posts - I have yet to get the slightest inkling of how you yourself
 propose to do this.


 -
 This list is sponsored by AGIRI: http://www.agiri.org/email
 To unsubscribe or change your options, please go to:
 http://v2.listbox.com/member/?;



[agi] Human uploading

2007-11-13 Thread Benjamin Goertzel
Richard,

I recently saw a talk by Todd Huffman at the Foresight Unconference on the
topic of
mind uploading technology, and he was specifically showing off techniques
for imaging slices of brain that *do* give the level of biological detail
you're thinking of.  Topics of discussion included, for example, inferring
synaptic strength indirectly from mitochondrial activity.

So, the Connectome people may not be taking a sufficiently fine-grained
approach to support mind-uploading, but others are trying...

Obviously, a detailed map of the brain at the level Todd is thinking of,
would be of more than peripheral interest to cognitive scientists.  It would
not resolve cognitive questions in itself, but would be a wonderful trove
of data to use to help validate or refute cognitive theories.

-- Ben G



On Nov 13, 2007 10:11 AM, Richard Loosemore [EMAIL PROTECTED] wrote:

 Bryan Bishop wrote:
  On Monday 12 November 2007 22:16, Richard Loosemore wrote:
  If anyone were to throw that quantity of resources at the AGI problem
  (recruiting all of the planet), heck, I could get it done in about 3
  years. ;-)
 
  I have done some research on this topic in the last hour and have found
  that a Connectome Project is in fact in the very early stages out
  there on the internet:
 
  http://iic.harvard.edu/projects/connectome.html
  http://acenetica.blogspot.com/2005/11/human-connectome.html
 
 http://acenetica.blogspot.com/2005/10/mission-to-build-simulated-brain.html
  http://www.indiana.edu/~cortex/connectome_plos.pdf

 This is the whole brain emulation approach, I guess (my previous
 comments were about evolution of brains rather than neural level
 duplication).

 But (switching topics to whole brain emulation) there are serious
 problems with this.

 It seems quite possible that what we need is a detailed map of every
 synapse, exact layout of dendritic tree structures, detailed knowledge
 of the dynamics of these things (they change rapidly) AND wiring between
 every single neuron.

 When I say it seems possible I mean that the chance of this
 information being absolutely necessary in order to understand what the
 neural system is doing, is so high that we would not want to gamble on
 them NOT being necessary.

 So are the researchers working at that level of detail?

 Egads, no!  Here's a quote from the PLOS Computational Biology paper you
 referenced (above):

 Attempting to assemble the human connectome at the level
 of single neurons is unrealistic and will remain infeasible at
 least in the near future.

 They are not even going to do it at the resolution needed to see
 individual neurons?!

 I think that if they did the whole project at that level of detail it
 would amount to a possibly interesting hint at some of the wiring, of
 peripheral interest to people doing work at the cognitive system level.
  But that is all.

 I think it would be roughly equivalent to the following:  You say to me
 "I want to understand how computers work, in enough detail to build my
 own" and I reply with "I can get you a photo of a motherboard and a
 500 by 500 pixel image of the inside of an Intel chip..."



 Richard Loosemore

 -
 This list is sponsored by AGIRI: http://www.agiri.org/email
 To unsubscribe or change your options, please go to:
 http://v2.listbox.com/member/?;



Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-13 Thread Benjamin Goertzel
 view that the current direction of Novamente
  is -- pick one:  a) a needle in an infinite haystack or b) too fragile
  to succeed -- particularly since I'm pretty sure that you couldn't
  convince me without making some serious additions to Novamente
 
 
  - Original Message -
  *From:* Benjamin Goertzel [EMAIL PROTECTED]
  *To:* agi@v2.listbox.com
  *Sent:* Monday, November 12, 2007 3:49 PM
  *Subject:* Re: [agi] What best evidence for fast AI?
 
 
  To be honest, Richard, I do wonder whether a sufficiently in-depth
  conversation
  about AGI between us would result in you changing your views about
  the CSP
  problem in a way that would accept the possibility of Novamente-type
  solutions.
 
  But, this conversation as I'm envisioning it would take dozens of
  hours, and would
  require you to first spend 100+ hours studying detailed NM
  materials, so this seems
  unlikely to happen in the near future.
 
  -- Ben
 
  On Nov 12, 2007 3:32 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
 
  Benjamin Goertzel wrote:
   
Ed --
   
Just a quick comment: Mark actually read a bunch of the
  proprietary,
NDA-required Novamente documents and looked at some source
  code (3 years
ago, so a lot of progress has happened since then).  Richard
  didn't, so
he doesn't have the same basis of knowledge to form detailed
  comments on
NM, that Mark does.
 
  This is true, but not important to my line of argument, since of
  course
  I believe that a problem exists (CSP), which we have discussed
 on a
  number of occasions, and your position is not that you have some
  proprietary, unknown-to-me solution to the problem, but rather
  that you
  do not really think there is a problem.
 
  Richard Loosemore
 
  -
  This list is sponsored by AGIRI: http://www.agiri.org/email
  To unsubscribe or change your options, please go to:
  http://v2.listbox.com/member/?; http://v2.listbox.com/member/?;
 
 
 
 
 
  This list is sponsored by AGIRI: http://www.agiri.org/email
  To unsubscribe or change your options, please go to:
  http://v2.listbox.com/member/?; http://v2.listbox.com/member/?;
 
  
  This list is sponsored by AGIRI: http://www.agiri.org/email
  To unsubscribe or change your options, please go to:
  http://v2.listbox.com/member/?;
  http://v2.listbox.com/member/?;

 -
 This list is sponsored by AGIRI: http://www.agiri.org/email
 To unsubscribe or change your options, please go to:
 http://v2.listbox.com/member/?;



Re: [agi] What best evidence for fast AI?

2007-11-13 Thread Benjamin Goertzel



 For example, what is the equivalent of the activation control (or search)
 algorithm in Google sets.  They operate over huge data.  I bet the
 algorithm for calculating their search or activation is relatively simple
 (much, much, much less than a PhD theses) and look what they can do.  So I
 think one path is to come up with applications that can use and reason with
 large data, having roughly world knowledge-like sparseness, (such as NL
 data) and start with relatively simple activation algorithms and develop
 then from the ground up.



Google, I believe, does reasoning about word and phrase co-occurrence using
a combination of Bayes net learning with EM clustering (this is based on
personal conversations with folks who have worked on related software
there).

The use of EM helps the Bayes net approach scale.
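
For concreteness, here is a generic latent-class sketch of EM clustering over
a word co-occurrence count matrix -- the usual way EM is used to smooth and
compress such data.  This is only an illustration of the technique, not a
description of Google's actual system, and all names and numbers are made up.

import numpy as np

def em_cooccurrence(C, n_clusters=4, n_iters=50, seed=0):
    """Fit P(i, j) = sum_z P(z) P(i|z) P(j|z) to a co-occurrence count
    matrix C[i, j] with EM, returning the cluster prior and the two
    conditional distributions."""
    rng = np.random.default_rng(seed)
    V1, V2 = C.shape
    pz = np.full(n_clusters, 1.0 / n_clusters)           # P(z)
    pi_z = rng.dirichlet(np.ones(V1), size=n_clusters)   # P(i|z), shape (Z, V1)
    pj_z = rng.dirichlet(np.ones(V2), size=n_clusters)   # P(j|z), shape (Z, V2)
    for _ in range(n_iters):
        # E-step: responsibilities q[z, i, j] proportional to P(z) P(i|z) P(j|z)
        q = pz[:, None, None] * pi_z[:, :, None] * pj_z[:, None, :]
        q /= q.sum(axis=0, keepdims=True) + 1e-12
        # M-step: re-estimate parameters from expected counts
        ec = q * C[None, :, :]
        nz = ec.sum(axis=(1, 2)) + 1e-12
        pz = nz / nz.sum()
        pi_z = ec.sum(axis=2) / nz[:, None]
        pj_z = ec.sum(axis=1) / nz[:, None]
    return pz, pi_z, pj_z

# The smoothed co-occurrence probability of any pair (i, j) is then
#   P(i, j) = sum_z P(z) P(i|z) P(j|z),
# which scales far better than storing the full pairwise table.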

Bayes nets are good for domains like word co-occurrence probabilities, in
which the relevant data is relatively static.  They are not much good for
real-time learning.

Unlike Bayes nets, the approach taken in PLN and NARS allows efficient
uncertain reasoning in dynamic environments based on large knowledge bases
(at least in principle, based on the math, algorithms and structures; we
haven't proved it yet).

-- Ben G


Re: [agi] Human uploading

2007-11-13 Thread Benjamin Goertzel



 Yes, I thought I had heard of people trying more ambitious techniques,
 but in the cases I heard of (can't remember where now) the tradeoffs
 always left the approach hanging on one of the issues:  for example, was
 he talking about scanning mitochondrial activity in vivo, in real time,
 across the whole brain?!!  The mind boggles.  [Uh, and it probably
 would, if you were the subject].  Some people think they can do very
 thin slices, but they are in defuncto, not in vivo.



Yes, Todd believes (like most mind uploading experts) that the most
practical
approach to mind uploading in the near term is to slice a dead brain and
scan
it in.  Doing uploading on live brains is bound to be far more
technologically
demanding, so it makes sense to focus on uploading fresh-killed brains
first.





 Couldn't see any good references to this.



It was a talk, not a publication.  Not sure if it was videotaped or not.

-- Ben


Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-13 Thread Benjamin Goertzel
On Nov 13, 2007 2:37 PM, Richard Loosemore [EMAIL PROTECTED] wrote:


 Ben,

 Unfortunately what you say below is tangential to my point, which is
 what happens when you reach the stage where you cannot allow any more
 vagueness or subjective interpretation of the qualifiers, because you
 have to force the system to do its own grounding, and hence its own
 interpretation.



I don't see why you talk about forcing the system to do its own grounding
--
the probabilities in the system are grounded in the first place, as they
are calculated based on experience.

The system observes, records what it sees, abstracts from it, and chooses
actions that it guesses will fulfill its goals.  Its goals are ultimately
grounded in in-built
feeling-evaluation routines, measuring stuff like amount of novelty
observed,
amount of food in system etc.

So, the system sees and then acts ... and the concepts it forms and uses
are created/used based on their utility in deriving appropriate actions.

There is no symbol-grounding problem except in the minds of people who
are trying to interpret what the system does, and get confused.  Any symbol
used within the system, and any probability calculated by the system, are
directly grounded in the system's experience.

There is nothing vague about an observation like "Bob_Yifu was observed
at time-stamp 599933322", or a fact like "Command 'wiggle ear' was sent
at time-stamp 54".  These perceptions and actions are the root of the
probabilities the system calculated, and need no further grounding.
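
A tiny sketch of what "grounded in the system's experience" can mean in
practice: a strength computed directly from timestamped perception records.
The event tuples and the counting scheme below are invented for illustration,
not Novamente's actual perception format.

observations = [
    (599933301, "saw", "Bob_Yifu"),
    (599933301, "at",  "home"),
    (599933322, "saw", "Bob_Yifu"),
    (599933322, "at",  "market"),
    (599933378, "saw", "Bob_Yifu"),
    (599933378, "at",  "home"),
]

def strength(observations, entity, place):
    """Fraction of the time-stamps at which `entity` was seen where the
    agent was also at `place` -- an experience-grounded frequency needing
    no further interpretation."""
    seen_at = {t for (t, kind, x) in observations if kind == "saw" and x == entity}
    there   = {t for (t, kind, x) in observations if kind == "at" and x == place}
    if not seen_at:
        return 0.0
    return len(seen_at & there) / len(seen_at)

print(strength(observations, "Bob_Yifu", "home"))   # 2/3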



 What you gave below was a sketch of some more elaborate 'qualifier'
 mechanisms.  But I described the process of generating more and more
 elaborate qualifier mechanisms in the body of the essay, and said why
 this process was of no help in resolving the issue.


So, if a system can achieve its goals based on choosing procedures that
it thinks are likely to achieve its goals, based on the knowledge it
gathered
via its perceived experience -- why do you think it has a problem?

I don't really understand your point, I guess.  I thought I did -- I thought
your point was that precisely specifying the nature of a conditional
probability
is a rats-nest of complexity.  And my response was basically that in
Novamente we don't need to do that, because we define conditional
probabilities
based on the system's own knowledge-base, i.e.

Inheritance A B .8

means

If A and B were reasoned about a lot, then A would (as measured by a
weighted average) have 80% of the relationships that B does
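
Spelled out as a toy calculation (the relationship sets and weights are
invented, and the real PLN definition involves extensional/intensional
evidence and confidence values omitted here), that reading looks roughly
like this:

def inheritance_strength(rels_a, rels_b, weight):
    """Weighted fraction of B's relationships that A also has, where the
    weight says how much each relationship 'was reasoned about'."""
    total = sum(weight.get(r, 1.0) for r in rels_b)
    if total == 0:
        return 0.0
    shared = sum(weight.get(r, 1.0) for r in rels_b if r in rels_a)
    return shared / total

rels_cat    = {"has_fur", "meows", "hunts_mice", "is_pet"}
rels_animal = {"has_fur", "meows", "hunts_mice", "is_pet", "lives_in_zoo"}
weight = {"has_fur": 3.0, "meows": 1.0, "hunts_mice": 1.0,
          "is_pet": 2.0, "lives_in_zoo": 1.0}

# strength of  Inheritance cat animal  under this reading:
print(inheritance_strength(rels_cat, rels_animal, weight))   # 7/8 = 0.875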

But apparently you were making some other point, which I did not grok,
sorry...

Anyway, though, Novamente does NOT require logical relations of escalating
precision and complexity to carry out reasoning, which is one thing you
seemed
to be assuming in your post.

Ben


Re: [agi] What best evidence for fast AI?

2007-11-13 Thread Benjamin Goertzel


 This is the thing that I think is relevent to Robin Hanson's original
 question.  I think we can build 1+2 is short order, and maybe 3 in a
 while longer. But the result of 1+2+3 will almost surely be an
 idiot-savant: knows everything about horses, and can talk about them
 at length, but, like a pedantic lecturer, the droning will put you
 asleep.  So is there more to AGI, and exactly how do way start laying
 hands on that?

 --linas



I think that evolutionary-learning-type methods play a big role in
creativity.

I elaborated on this quite a bit toward the end of my 1997 book From
Complexity to Creativity.

Put simply, inference is ultimately a local search method -- inference
rules, even heuristic and speculative ones, always lead you step by step
from what you know into the unknown.  This makes you, as you say, like
a pedantic lecturer.

OTOH, evolutionary algorithms can take big creative leaps.  This is one
reason why the MOSES evolutionary algorithm plays a big role in the
Novamente design (the other, related reason being that evolutionary learning
is
better than logical inference for many kinds of procedure learning).
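
A toy illustration of that contrast (my own example, not taken from the book
or from MOSES): a local-search move changes one bit of a candidate solution,
while an evolutionary move -- crossover over a population plus mutation --
can recombine whole chunks and land far from any single parent in one jump.

import random
random.seed(1)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def local_step(x):
    """Hill-climbing style move: flip a single bit."""
    i = random.randrange(len(x))
    return x[:i] + [1 - x[i]] + x[i + 1:]

def evolutionary_step(parent1, parent2, mutation_rate=0.05):
    """Uniform crossover of two parents plus mutation: the child mixes
    building blocks from both, so it can be far from either parent."""
    child = [random.choice(pair) for pair in zip(parent1, parent2)]
    return [1 - b if random.random() < mutation_rate else b for b in child]

n = 40
p1 = [random.randint(0, 1) for _ in range(n)]
p2 = [random.randint(0, 1) for _ in range(n)]

print("local step distance:       ", hamming(p1, local_step(p1)))             # always 1
print("evolutionary step distance:", hamming(p1, evolutionary_step(p1, p2)))  # typically ~n/4 or more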

Integrating evolution with logic is key to intelligence.  The brain does it,
I believe, via

-- implementing logic via Hebbian learning (neuron-level Hebb stuff leading
to
PLN-like logic stuff on the neural-assembly level)
-- implementing evolution via Edelman-style Neural Darwinist neural map
evolution (which ultimately bottoms out in Hebbian learning too)

Novamente seeks to enable this integration via grounding both inference
and evolutionary learning in probability theory.

-- Ben G




Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-13 Thread Benjamin Goertzel


 But as a human, asking Wen out on a date, I don't really know what
 "Wen likes cats" ever really meant. It neither prevents me from talking
 to Wen, nor from telling my best buddy that "...well, I know, for
 instance, that she likes cats..."


yes, exactly...

The NLP statement "Wen likes cats" is vague in the same way as the
Novamente or NARS relationship

EvaluationLink
    likes
    ListLink
        Wen
        cats

is vague.  The vagueness passes straight from NLP into the internal KR,
which is how it should be.

And that same vagueness may be there if the relationship is learned via
inference based on experience, rather than acquired by natural language.

I.e., if the above relationship is inferred, it may just mean that

 {the relationship between Wen and cats} shares many relationships with
other person/object relationships that have been categorized as 'liking'
before

In this case, the system can figure out that Wen likes cats without ever
actually making explicit what this means.  All it knows is that, whatever it
means,
it's the same thing that was meant in other circumstances where liking
was used as a label.
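
A toy version of that inference (invented example, not Novamente code): the
Wen/cats relationship gets categorized as 'liking' because its observed
features overlap heavily with relationships already labeled 'liking', without
the meaning of 'liking' ever being spelled out.

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

# features observed for relationships previously labeled 'liking'
liking_examples = [
    {"seeks_out", "smiles_at", "talks_about", "spends_time_with"},
    {"seeks_out", "smiles_at", "spends_time_with", "photographs"},
]
prototype = set.intersection(*liking_examples)   # features common to the examples

wen_cats = {"seeks_out", "smiles_at", "spends_time_with", "feeds"}

similarity = jaccard(wen_cats, prototype)
print(similarity)                                    # 0.75
print("likes" if similarity > 0.5 else "unknown")    # likes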

So, vagueness can not only be important into an AI system from natural
language,
but also propagated around the AI system via inference.

This is NOT one of the trickier things about building probabilistic AGI,
it's really
kind of elementary...

-- Ben G


Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-13 Thread Benjamin Goertzel



 So, vagueness can not only be important


imported, I meant


 into an AI system from natural language,
 but also propagated around the AI system via inference.

 This is NOT one of the trickier things about building probabilistic AGI,
 it's really
 kind of elementary...

 -- Ben G




