RE: [agi] An AGI Test/Prize

2007-10-22 Thread John G. Rose
Ben,

 

That is sort of a neat kind of device. I will have to think about that; as it
is fairly dynamic, I may have to look that one up and potentially experiment
with it.

 

The kinds of algebraic structures I'm talking about are basically as many as
possible, including things like sets without operators and related things not
classified as algebraic. I can talk generally about this and then maybe get
into specifics. The idea is this: let's say you are a developer writing, say,
a web server. How do you go about it? The first thing you do is scrounge the
internet for snippets and source code, libraries, specs, etc. The AGI I'm
talking about is approached the same way, 'cept you scrounge mathematics
publications, generally ones dealing with abstract algebras. There are
hundreds of years of such code snippets (with proofs, BTW), but to start off
we stick with the simple stuff - groups, rings, fields, algebras, groupoids,
etc., including sub-chunks and twists of these things. Sticking with discrete
for starters, except for some continuous here and there.
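To make the "simple stuff" concrete, here is a minimal sketch (entirely my own illustration, not any existing system) of representing a finite structure as a set plus an operator and checking the group axioms:

```python
# Hypothetical sketch: a finite algebraic structure as (set, operator),
# with a brute-force check of the group axioms. Purely illustrative.
from itertools import product

def is_group(elements, op):
    """Check closure, associativity, identity, and inverses."""
    elems = list(elements)
    # Closure: every product must land back in the set
    if any(op(a, b) not in elements for a, b in product(elems, elems)):
        return False
    # Associativity: (a*b)*c == a*(b*c) for all triples
    if any(op(op(a, b), c) != op(a, op(b, c))
           for a, b, c in product(elems, elems, elems)):
        return False
    # Identity element
    identity = next((e for e in elems
                     if all(op(e, a) == a and op(a, e) == a for a in elems)),
                    None)
    if identity is None:
        return False
    # Inverses: every element must have one
    return all(any(op(a, b) == identity for b in elems) for a in elems)

z4 = {0, 1, 2, 3}
print(is_group(z4, lambda a, b: (a + b) % 4))  # True: Z4 under addition
print(is_group(z4, lambda a, b: (a * b) % 4))  # False: 0 has no inverse
```

The same brute-force pattern extends to weaker structures (drop axioms to get monoids, semigroups, magmas), which is one way to read the "sub-chunks and twists" above.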

 

One might ask: why do it this way? The idea is that the framework is an
elaborate, universal, super-powerful construct - basically all abstract math
- defined by man cumulatively over time, grounded in rigorous proofs and
absolutes. The goal is to get everything into it, meaning all data input is
analyzed for algebraic structure and put into the thing. It's a highly dense
algebraic superhighway mesh. Yes, you have to emulate it on digital
computers - go from an infinite algebraic mesh to a physical, real, emulated
digital subset - BUT that's kind of what our brains do. We happen to live in
(at least from a day-to-day perspective) a very finite-resource world. I'd
like to delve deeper into digital physics but will not here :)

 

So there is a little background. All we are talking about is math, data, and
a computer. So, getting stuff into it? Think about it this way: built-in
lossy compression. Yes, you have sensory memory duration gradations - for
example, photographic down to skeletal - but extracting the algebraic
structure is where the AI and stats tools get used. You can imagine how that
works, but the goal is algebraic structure, especially operators and magma
detection. Imagine, for example, a dog running: look at all the cyclic groups
going on - symmetry, sets. These are signatures. Motion operators - subgroups
of bodily movement definitions, sampled, form a behavioral display. Then put
the dog into memory - morphism storage, all dogs ever seen - think of a
telescoping, morphism-tree, index-like structure. The AGI internals include
morphism and functor networks, kind of like analogy tree nets. Subgroups,
subfields, etc. are very important, as you leverage their structure defined
onto their instance representations.
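The "cyclic groups in a running dog" idea can be caricatured in a few lines. A sketch (my own invention, with made-up sample data) of finding the cyclic structure in a sampled periodic signal:

```python
# Hedged sketch of detecting cyclic structure in sampled motion data:
# find the smallest period p, i.e. the generator of a Z_p-like action
# on the sample sequence. Names and data are illustrative only.

def smallest_period(samples):
    """Return the smallest p > 0 with samples[i] == samples[i+p] wherever defined."""
    n = len(samples)
    for p in range(1, n):
        if all(samples[i] == samples[i + p] for i in range(n - p)):
            return p
    return n  # no repetition: only the trivial cyclic structure

# A sampled, repeating "stride" (stand-in for quantized joint angles):
gait = [0, 2, 5, 2, 0, 2, 5, 2, 0, 2, 5, 2]
print(smallest_period(gait))  # 4: the stride cycles with period 4
```

Real sensory data would of course be noisy, so an actual detector would need tolerance (autocorrelation or similar) rather than exact equality.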

 

Linguistic semantics? Same way. The AI-and-stats sensory layer has to break
it up into algebraic structure. You need complexity detection: a view of a
mountain and a view of a page of text have different complexity signatures,
so it detects text. The gradation from image to algebraic structure - the
exploded text, sets and operators - is processed according to its complexity
signature, ripped apart, and put into the algebraic text-structure mesh
memory of the built-in telescoping morphism tree (or basically mossy or
wormy structures at this point, from a dimensional cross-section view). The
linguistic text structure is hierarchies of intersecting subsets and
subgroups, with morphic relational trees intersecting with cyclic group and
subgroup indexers, etc., tied into the KB through, once again, algebraic
structure. Knowledge is very compressed and cyclic-group-centric (this seems
especially true of physical-world knowledge) - it sort of collapses with a
self-organizing effect as more data is added, and memories can be peeled off.
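The original doesn't say what a "complexity signature" is, so here is one guess at a minimal version: Shannon byte entropy, which already separates text-like from noise-like input. All names and data are my assumptions:

```python
# Sketch of a "complexity signature" detector, assuming Shannon entropy
# over bytes as the signature (my assumption; the text above does not
# specify a measure). Text has a lower, narrower signature than noisy,
# image-like data.
import math
from collections import Counter

def entropy_bits(data: bytes) -> float:
    """Shannon entropy of the byte distribution, in bits per byte."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

text = b"the quick brown fox jumps over the lazy dog " * 20
# Stand-in for image-like data: a near-uniform spread over byte values
noisy = bytes((i * 197 + 31) % 256 for i in range(880))

print(entropy_bits(text) < entropy_bits(noisy))  # True: text is simpler
```

A dispatcher could route low-entropy input to the text pipeline and high-entropy input to the image pipeline, which is roughly the "processed according to its complexity sig" step.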

 

Anyway, kind of understand where it's headed? 

 

John

 

 

From: Benjamin Goertzel [mailto:[EMAIL PROTECTED] 

John Rose,

As a long-lapsed mathematician, I'm curious about your system, but what
you've said about it so far doesn't really tell me much...

Do you have a mathematical description of your system? 

I did some theoretical work years ago representing complex systems dynamics
in terms of abstract algebras.  What I showed there was that you could
represent a certain kind of multi-component system, with complex
inter-component interactions, in such a way that its dynamic evolution over
time is equivalent to the iteration of a quadratic function in a
high-dimensional space with an eccentric multiplication table on it.  The
multiplication table basically encodes information of the form 

(component i) acts_on (component j) to produce (component k)

where acts_on is the mult. operator... So then complex systems dynamics
all come down to Julia sets and Mandelbrot sets on high-dimensional real
algebras ;-)
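The construction can be sketched concretely. This is a toy reconstruction of my own (the 3-component multiplication table and all parameters are invented, not from the actual work): iterate z <- z*z + c where "*" is defined by a table T with T[i][j] = k meaning (component i) acts_on (component j) to produce (component k), and measure escape time as in Julia-set iteration:

```python
# Toy quadratic iteration on R^n with an arbitrary multiplication table.
# Entirely illustrative; the table below is an invented, non-commutative one.

def mult(x, y, table):
    """Bilinear product: e_i * e_j = e_{table[i][j]}."""
    n = len(x)
    out = [0.0] * n
    for i in range(n):
        for j in range(n):
            out[table[i][j]] += x[i] * y[j]
    return out

def escape_time(z, c, table, max_iter=50, bound=1e6):
    """Iterate z <- z*z + c; return the step at which the orbit escapes."""
    for t in range(max_iter):
        z = [a + b for a, b in zip(mult(z, z, table), c)]
        if sum(a * a for a in z) > bound:
            return t        # escaped: divergent regime
    return max_iter          # orbit stayed bounded

# A 3-component system with an eccentric (non-commutative) table:
T = [[0, 2, 1],
     [2, 1, 0],
     [1, 0, 2]]
print(escape_time([0.0, 0.0, 0.0], [0.1, 0.2, -0.1], T))  # bounded orbit
print(escape_time([0.0, 0.0, 0.0], [1.5, 1.5, 1.5], T))   # escapes quickly
```

Coloring each c by its escape time would paint the Mandelbrot-style set of this algebra.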

I never ended up making any use of this direction of thinking, but I found
it interesting...

This stuff made it into my 1997 book From Complexity to Creativity I
believe...

I am curious what 

Re: [agi] An AGI Test/Prize

2007-10-22 Thread J Storrs Hall, PhD
On Monday 22 October 2007 08:05:26 am, Benjamin Goertzel wrote:
 ... but dynamic long-term memory, in my view, is a wildly
 self-organizing mess, and would best be modeled algebraically as a quadratic
 iteration over a high-dimensional real non-division algebra whose
 multiplication table is evolving dynamically as the iteration proceeds

Holy writhing Mandelbrot sets, Batman!

Why real and non-division? I particularly don't like real -- my computer can't 
handle the precision :-)

Josh

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=56270025-9c1ac7


Re: [agi] An AGI Test/Prize

2007-10-22 Thread Benjamin Goertzel
On 10/22/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:

 On Monday 22 October 2007 08:05:26 am, Benjamin Goertzel wrote:
  ... but dynamic long-term memory, in my view, is a wildly
  self-organizing mess, and would best be modeled algebraically as a
 quadratic
  iteration over a high-dimensional real non-division algebra whose
  multiplication table is evolving dynamically as the iteration
 proceeds

 Holy writhing Mandelbrot sets, Batman!

 Why real and non-division? I particularly don't like real -- my computer
 can't
 handle the precision :-)


You need to get the new NVidia AIXI chip ... it's a bargain at $infinity.99
;-)


RE: [agi] An AGI Test/Prize

2007-10-22 Thread John G. Rose
 Holy writhing Mandelbrot sets, Batman!
 
 Why real and non-division? I particularly don't like real -- my computer
 can't
 handle the precision :-)

Robin - forget all this digital stuff, it's a trap; we need some analog
nano-computers to help fight these crispy impostors!

John


Re: [agi] An AGI Test/Prize

2007-10-22 Thread Richard Loosemore

Benjamin Goertzel wrote:



On 10/22/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:


On Monday 22 October 2007 08:05:26 am, Benjamin Goertzel wrote:
  ... but dynamic long-term memory, in my view, is a wildly
  self-organizing mess, and would best be modeled algebraically as
a quadratic
  iteration over a high-dimensional real non-division algebra whose
  multiplication table is evolving dynamically as the iteration
proceeds

Holy writhing Mandelbrot sets, Batman!

Why real and non-division? I particularly don't like real -- my
computer can't
handle the precision :-)


You need to get the new NVidia AIXI chip ... it's a bargain at 
$infinity.99  ;-)


Oh, it's not the price of the NVidia AIXI chip that bothers me, it's the 
delivery:  Amazon says that orders will be shipped when Hell reaches Zero 
Degrees Kelvin.




Richard Loosemore



RE: [agi] An AGI Test/Prize

2007-10-22 Thread John G. Rose
Yeah, I'm not really agreeing with you here. Though I haven't really studied
other cognitive software structures, I feel that they can be built simpler
and more efficiently. But I shouldn't come out saying that unless I attack
some of the details, right? It's a gut reaction I have after working on so
many large software projects. And it does depend on the view of cognition.
Some of cognition is just hype; it depends on what you are trying to build.
There are a lot of warm-fuzzy, Dr. Feelgood things going on with cognition.
I like cognition as a machine: a systematic, controlled complexity modeler;
an edge-of-chaos-surfing, crystallographic, polytopically harmonic,
probabilistic sort of morphism and structure pump, with SOM injection -
yeah, I want a machine that rips through the fabric of the reality mesh.

 

John

 

 

From: Benjamin Goertzel [mailto:[EMAIL PROTECTED] 




Well the problem is that branches of algebra like universal algebra and
category theory, that don't assume highly particular algebraic rules, don't
really have any deep theorems that tell you anything...

Whereas the branches of algebra that really give you deep information, all
pertain to highly specialized structures that are very unlikely to be
relevant to cognition...





RE: [agi] An AGI Test/Prize

2007-10-22 Thread John G. Rose
Vladimir,

 

I'm using "system" as kind of a general word for a set and operator(s).

 

You are understanding it correctly, except "templates" is not quite right.
The templates are actually a vast internal complex of structure, which
includes morphisms, which are like templates.
 

But you are right, it does seem like a categorization approach. When you say
"categorization approach," can you point out an example of that which I can
look into?

 

John

 

 

From: Vladimir Nesov [mailto:[EMAIL PROTECTED] 



John,

What do you mean by system? You imply that these objects have a structure,
or equivalently are abstract models of the original input. So, you take the
original input in whatever form it's coming in, and based on it you create
instances of abstract structures according to templates that are known to
the system. Is that essentially correct? If so, it's very similar to a
categorization approach: you observe experience indirectly, through the
categorization structure that the current perception system produces for it.

If you need to model a boolean based space for some sort of sampled data
world it sees and correlates to that, the thing would generate a boolean
algebra modeled and represented onto that informational structure for that
particular space instance being studied. For example electronics theory -
it would need to model that world as an instance based on electronics
descriptor items and operators in that particular world or space set.
Electronics theory world could be spat out as something very minor that it
understands.


So, it would assemble a description 'in place' from local rules, based on
information provided by specific experience. Is that a correct restatement?

 

Not sure if my terminology is very standard, but do you understand the
thinking? It may very well be morphic to other AGI structures or theories, I
don't know, but I kind of like the way it works represented as such, because
it seems simple and not messy, yet very comprehensive, and it has other good
qualities.


It's very vague, but with a stretch of imagination it can be mapped to many
other views. At this level of detail it's unclear.
 


RE: [agi] An AGI Test/Prize

2007-10-21 Thread Edward W. Porter
You're busy and I'm busy, so we can wait for another topic before
communicating next.  But our communication on this topic has been
interesting.

Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: John G. Rose [mailto:[EMAIL PROTECTED]
Sent: Saturday, October 20, 2007 10:58 PM
To: agi@v2.listbox.com
Subject: RE: [agi] An AGI Test/Prize



Edward,



Oops, missed that - CA (cellular automata) is something that some other
people on the list could really enlighten you on, as it gets really deep
and elaborate. CA for me is a potential toolset for some basic programming
and logic constructs… so far.



Yes, all those goodies get modeled. I suppose I need to elaborate – hey,
wait a sec, how did I become the theorist on all this crap? heh



John





From: Edward W. Porter [mailto:[EMAIL PROTECTED]
Subject: RE: [agi] An AGI Test/Prize



So, do you or don't you model uncertainty, contradictory evidence, degree
of similarity, and all those good things?



And what is a CA, or don't I want to know?





 


Re: [agi] An AGI Test/Prize

2007-10-21 Thread Vladimir Nesov
On 10/21/07, John G. Rose [EMAIL PROTECTED] wrote:

  Vladimir,



 That may very well be the case and something that I'm unaware of. The
 system I have in mind basically has I/O that is algebraic structures.
 Everything that it deals with is modeled this way. Any sort of system that
 it analyzes it converts to a particular structure that represents the data.
 All of its internal mechanisms are mathematically abstracted out – except
 for ancillary hard coded out of band assistors, AI, statistics, database,
 etc. The idea is to have a system that can understand systems and generate
 systems specifically.

John,

What do you mean by system? You imply that these objects have a structure,
or equivalently are abstract models of the original input. So, you take the
original input in whatever form it's coming in, and based on it you create
instances of abstract structures according to templates that are known to
the system. Is that essentially correct? If so, it's very similar to a
categorization approach: you observe experience indirectly, through the
categorization structure that the current perception system produces for it.

If you need to model a boolean based space for some sort of sampled data
 world it sees and correlates to that, the thing would generate a boolean
 algebra modeled and represented onto that informational structure for that
 particular space instance being studied. For example electronics theory –
 it would need to model that world as an instance based on electronics
 descriptor items and operators in that particular world or space set.
 Electronics theory world could be spat out as something very minor that it
 understands.


So, it would assemble a description 'in place' from local rules, based on
information provided by specific experience. Is that a correct restatement?

Not sure if my terminology is very standard but do you understand the
 thinking? It may very well be morphic to other AGI structures or theories I
 don't know but I kind of like the way it works represented as such because
 it seems simple and not messy but very comprehensive and has other good
 qualities.


It's very vague, but with a stretch of imagination it can be mapped to many
other views. At this level of detail it's unclear.


-- 
Vladimir Nesov [EMAIL PROTECTED]


Re: [agi] An AGI Test/Prize

2007-10-20 Thread Gabriel Recchia
Has anyone come across (or written) any papers that argue for particular
low-level capabilities that any system capable of human-level intelligence
must possess, and which posit particular tests for assessing whether a
system possesses these prerequisites for intelligence?  I'm looking for
anything like this, or indeed anything that tries to lay out an incremental
path toward AGI with testable benchmarks along the way.  I'd be very
appreciative if anyone could point me to any such work.

Gabe


On 10/19/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:



 
  I largely agree. It's worth pointing out that Carnot published
  Reflections on
  the Motive Power of Fire and established the science of thermodynamics
  more
  than a century after the first working steam engines were built.
 
  That said, I opine that an intuitive grasp of some of the important
  elements
  in what will ultimately become the science of intelligence is likely to
  be
  very useful to those inventing AGI.
 


 Yeah, most certainly... However, an intuitive grasp -- and even a
 well-fleshed-out
 qualitative theory supplemented by heuristic back-of-the-envelope
 calculations
 and prototype results -- is very different from a defensible, rigorous
 theory that
 can stand up to the assaults of intelligent detractors

 I didn't start seriously trying to design & implement AGI until I felt I
 had a solid
 intuitive grasp of all related issues.  But I did make a conscious choice
 to devote
 more effort to utilizing my intuitive grasp to try to design and create
 AGI,
 rather than to creating better general AI theories... Both are worthy
 pursuits,
 and both are difficult.  I actually enjoy theory better.  But my sense is
 that the
 heyday of AGI theorizing is gonna come after AGI experimentation has
 progressed
 a good bit further than it has today...

 -- Ben G



Re: [agi] An AGI Test/Prize

2007-10-20 Thread David McFadzean
On 10/19/07, Matt Mahoney [EMAIL PROTECTED] wrote:

 http://www.vetta.org/documents/ui_benelearn.pdf

 Unfortunately the test is not computable.

True, but how about testing intelligence by comparing the performance
of an agent across several computable environments (randomly generated
finite games) to the performance of a random agent? I suspect this
measure would provide a reasonable estimate of Hutter's definition and
could be made arbitrarily more accurate by increasing the number and
complexity of test environments.
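The proposal can be sketched in a few lines. All design choices here are mine, not David's: the "random finite games" are k-armed bandits with random payoffs, the candidate agent is a simple greedy learner, and intelligence is scored as average reward advantage over a uniformly random agent:

```python
# Hedged sketch of a random-environment intelligence benchmark.
# Everything below (bandit environments, greedy learner, scoring) is an
# invented stand-in, chosen only to make the comparison concrete.
import random

def random_game(k=5, seed=None):
    """A random finite 'game': expected payoff per action."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(k)]

def play(agent_step, payoffs, steps=200):
    """Run an agent in one game; return its average reward."""
    rng = random.Random(0)
    total, memory = 0.0, {}
    for _ in range(steps):
        a = agent_step(memory, len(payoffs), rng)
        r = 1.0 if rng.random() < payoffs[a] else 0.0
        memory.setdefault(a, []).append(r)
        total += r
    return total / steps

def greedy_agent(memory, k, rng):
    if rng.random() < 0.1 or not memory:   # explore 10% of the time
        return rng.randrange(k)
    return max(memory, key=lambda a: sum(memory[a]) / len(memory[a]))

def random_agent(memory, k, rng):
    return rng.randrange(k)

games = [random_game(seed=s) for s in range(30)]
score = sum(play(greedy_agent, g) - play(random_agent, g)
            for g in games) / len(games)
print(score > 0)  # True: the learner beats the random baseline on average
```

Increasing the number and depth of environments, as suggested, tightens the estimate; real sequential games would replace the bandits without changing the scoring scheme.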

David



RE: [agi] An AGI Test/Prize

2007-10-20 Thread Edward W. Porter
I guess I am mundane.  I don’t spend a lot of time thinking about a
“definition of intelligence.”  Goertzel’s is good enough for me.



Instead I think in  terms of what I want these machines to do -- which
includes human-level:



-NL understanding and generation (including discourse level)

-Speech recognition and generation (including appropriate pitch and volume
modulation)

-Non-speech auditory recognition and generation

-Visual recognition and real time video generation

-World-knowledge representation, understanding and reasoning

-Computer program understanding and generation

-Common sense reasoning

-Cognition

-Context sensitivity

-Automatic learning

-Intuition

-Creativity

-Inventiveness

-Understanding human nature and human desires and goals(not expecting full
human-level here)

-Ability to scan and store and, over time, convert and incorporate into
learned deep structure vast amounts of knowledge including ultimately all
available recorded knowledge





To do such thinking I have come up with a fairly uniform approach to all
these tasks, so I guess you could call that approach something approaching
a theory of intelligence.  But I mainly think of it as a theory of how
to get certain really cool things done.



I don’t expect to get what is listed all at once, but, barring some major
setback, this will probably all happen (with perhaps a partial exception on
the last item) within twenty years, and with the right people getting big
money most of it could substantially all happen in ten.



In addition, as we get closer to the threshold I think “intelligence” (at
least from our perspective) should include:



-helping make individual people, human organizations, and human government
more intelligent, happy, cooperative, and peaceful

-helping create a transition into the future that is satisfying for most
humans


Edward W. Porter
Porter  Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]

-Original Message-
From: John G. Rose [mailto:[EMAIL PROTECTED]
Sent: Saturday, October 20, 2007 1:27 PM
To: agi@v2.listbox.com
Subject: RE: [agi] An AGI Test/Prize



Interesting background on some thermodynamics history :)



But basic definitions of intelligence - we're not talking about reinventing
particle physics here, just a basic, workable definition, not a rigorous
mathematical proof, just something simple. AI, AGI... c’mon, not asking for
tooo much. In my mind it is not looking that sophisticated at the atomic
level, and it seems VERY applicable for implementation, if not required for
testing. Though Hutter and Legg are apparently working diligently on this
stuff and have a lot of papers.



John





I largely agree. It's worth pointing out that Carnot published
Reflections on
the Motive Power of Fire and established the science of thermodynamics
more
than a century after the first working steam engines were built.

That said, I opine that an intuitive grasp of some of the important
elements
in what will ultimately become the science of intelligence is likely to be
very useful to those inventing AGI.



Yeah, most certainly... However, an intuitive grasp -- and even a
well-fleshed-out
qualitative theory supplemented by heuristic back-of-the-envelope
calculations
and prototype results -- is very different from a defensible, rigorous
theory that
can stand up to the assaults of intelligent detractors

I didn't start seriously trying to design & implement AGI until I felt I
had a solid
intuitive grasp of all related issues.  But I did make a conscious choice
to devote
more effort to utilizing my intuitive grasp to try to design and create
AGI,
rather than to creating better general AI theories... Both are worthy
pursuits,
and both are difficult.  I actually enjoy theory better.  But my sense is
that the
heyday of AGI theorizing is gonna come after AGI experimentation has
progressed
a good bit further than it has today...




 


Re: [agi] An AGI Test/Prize

2007-10-20 Thread Robert Wensman
Regarding testing grounds for AGI: personally I feel that ordinary computer
games could provide an excellent proving ground for the early stages of AGI,
or maybe even better ones if they are especially constructed. Computer games
are usually designed to encourage the player toward creativity and
exploration. Take a simple platform game, for example: at every new stage,
new graphics and monsters are introduced, and overall the player undergoes
continuous self-training that lasts throughout the whole game. Game
developers carefully distribute rewards and challenges to make this learning
process as smooth as possible.

But I would also like to say that any given proving ground for the first
stages of AGI could be misused if AGI designers bring specialized code into
their systems. So if there is to be a competition for first-generation AGI,
there would have to be some referee who evaluates how much domain-specific
knowledge has been encoded into any given system.

For the late development stages of AGI, where we basically have virtual
human minds, we could use problems so hard that specialized code could not
help the AGI system anymore. But I guess that by that time we would have
basically already solved the problem of AGI, and competitions where AGI
systems compete in writing essays on some subject could only be used to
polish an already outlined solution to AGI.

I am a fan of Novamente, but, for example, when I watched the movie where
they trained an AGI dog, I was left wondering what parts of its cognition
were specialization. For example, the human teacher used natural language to
talk to the dog. Did the dog understand any of it, and in that case, was
there a special language module involved? Also, training a dog is quite
open-ended, and it is difficult to assess what counts as progress. This
shows just how difficult it is to demonstrate AGI. Any demonstration of AGI
would have to come with a list of which cognitive aspects are coded and
which are learnt. Only then can you understand whether it is impressive or
not.

Also, because we need firm rules about what can be pre-programmed and what
needs to be learnt, it is easier if we use some world with pretty simple
mechanics. What I basically would like to see is an AGI learning to play a
certain computer game, starting by learning the fundamentals, and then
playing it to the end. Take an old videogame classic like The Legend of
Zelda (http://www.zelda.com/universe/game/zelda/). I know a lot of you would
say that this is a far too simplistic world for training an AGI, but not if
you prohibit ANY pre-programmed knowledge. You only allow the AGI system to
start with proto-knowledge representation, and basically hard-wire the
in-game rewards and punishments to the goal of the AGI. The AGI system would
then have to learn basic concepts such as:

objects moving around on the screen
which graphics correspond to yourself
walls, and where you can go
keys that open doors
the concept of coming to a new screen when walking off the edge of one
how screens relate to each other
teleportation (the flute, for anyone who remembers)

If the AGI system can then learn to play the game to the end and slay
Ganon based only on proto-knowledge, then maybe we have something
interesting going on. Such an AGI could maybe be compared to a rodent
running in a maze, even if the motoric and vision systems are more
complicated. Then we are ready to increase the complexity of the computer
game, adding communication with other characters, more complex concepts and
puzzles, more dimensions, more motorics, etc.
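The "hard-wire the in-game rewards to the goal of the AGI" part can be sketched as a toy: here the game, map, reward values, and random-walk baseline are all invented for illustration; the only built-ins are the reward channel and raw cell observations, matching the proto-knowledge restriction above:

```python
# Hedged sketch: a minimal screen where rewards/punishments are the only
# hard-wired signal. A learner would hang off the reward channel; here a
# random walker stands in for it. Everything below is hypothetical.
import random

WORLD = ["#########",
         "#A..#..K#",
         "#.#.#.#.#",
         "#.....#D#",
         "#########"]  # A = start, K = key, D = locked door, # = wall

REWARD = {"K": 1.0, "D": 5.0, "#": -0.1}  # the hard-wired goal signal

def step(pos, move, grid, has_key):
    """Apply one move; return (new position, reward, key status)."""
    (r, c), (dr, dc) = pos, move
    cell = grid[r + dr][c + dc]
    if cell == "#" or (cell == "D" and not has_key):
        return pos, REWARD["#"], has_key        # bumped a wall or locked door
    return (r + dr, c + dc), REWARD.get(cell, 0.0), has_key or cell == "K"

pos, has_key, total = (1, 1), False, 0.0
rng = random.Random(42)
for _ in range(500):                             # random-exploration baseline
    move = rng.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
    pos, reward, has_key = step(pos, move, WORLD, has_key)
    total += reward
print(total)
```

The referee rule above then reduces to auditing what sits outside `step` and `REWARD`: everything else (walls, keys, doors, screen topology) must be learned from the reward stream.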

Basically, I would like to see Novamente and similar AGI systems play some
goal-oriented computer game, since AGI in itself needs to be goal-oriented.

/R



2007/10/20, Benjamin Goertzel [EMAIL PROTECTED]:



 
  I largely agree. It's worth pointing out that Carnot published
  Reflections on
  the Motive Power of Fire and established the science of thermodynamics
  more
  than a century after the first working steam engines were built.
 
  That said, I opine that an intuitive grasp of some of the important
  elements
  in what will ultimately become the science of intelligence is likely to
  be
  very useful to those inventing AGI.
 


 Yeah, most certainly... However, an intuitive grasp -- and even a
 well-fleshed-out
 qualitative theory supplemented by heuristic back-of-the-envelope
 calculations
 and prototype results -- is very different from a defensible, rigorous
 theory that
 can stand up to the assaults of intelligent detractors

 I didn't start seriously trying to design & implement AGI until I felt I
 had a solid
 intuitive grasp of all related issues.  But I did make a conscious choice
 to devote
 more effort to utilizing my intuitive grasp to try to design and create
 AGI,
 rather than to creating better general AI theories... Both are worthy
 pursuits,
 and both are difficult.  I actually enjoy theory better.  But my sense is
 that the
 heyday of AGI 

RE: [agi] An AGI Test/Prize

2007-10-20 Thread John G. Rose
No, you are not mundane. All these things on the list (or most) are very
much to be expected from a generally intelligent system or its derivatives.
But I have this urge, being a software developer, to smash all these things
up into their constituent components, partition commonalities, eliminate
dupes, and perhaps smash them up further into an atomic representation of
intelligence: little intelligent engines that can be combined in various
ways to build higher-level functions. Kind of like a cellular automata
approach, and perhaps CA structures can be used. I really don't want to
waste 10 years developing a giant piece of bloatage code that never fully
works. Better to exhaust all possibilities in the mind and on paper as much
as possible, since software dev can be a giant PIA mess if not thought out
beforehand. Yes, you can only go so far before doing prototyping and
testing, but certain prototypes can take many months to build.
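The "atomic engines combined into higher-level functions" idea is exactly what an elementary cellular automaton demonstrates. A minimal sketch (the CA framing is my chosen illustration, not a claim about any actual design):

```python
# Each cell is a tiny "engine" applying one local rule; global behavior
# emerges purely from composing identical atomic parts. Illustrative only.

def make_rule(n):
    """Wolfram elementary rule n as a function on a 3-cell neighborhood."""
    bits = [(n >> i) & 1 for i in range(8)]
    return lambda l, c, r: bits[(l << 2) | (c << 1) | r]

def evolve(cells, rule, steps):
    """Apply the rule synchronously on a ring of cells."""
    for _ in range(steps):
        cells = [rule(cells[i - 1], cells[i], cells[(i + 1) % len(cells)])
                 for i in range(len(cells))]
    return cells

rule110 = make_rule(110)            # rule 110 is known to be Turing-complete
row = [0] * 15 + [1] + [0] * 15     # single seed cell
print(sum(evolve(row, rule110, 10)))  # the seed grows a complex structure
```

Rule 110's Turing-completeness is the formal version of the hope here: sufficiently composed atomic engines can, in principle, implement any higher-level function.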

 

Several on this email list have already gotten to this point, and it may be
more productive digesting their systems instead of reinventing.  Even so,
that leaves many questions open about testing. Someone can claim they have
AGI, but how do you really know? It could be just a highly sophisticated
chatterbot.

 

 John

 

 

From: Edward W. Porter [mailto:[EMAIL PROTECTED] 



I guess I am mundane.  I don't spend a lot of time thinking about a
definition of intelligence.  Goertzel's is good enough for me.  

 

Instead I think in  terms of what I want these machines to do -- which
includes human-level:

 

-NL understanding and generation (including discourse level)

-Speech recognition and generation (including appropriate pitch and volume
modulation)

-Non-speech auditory recognition and generation

-Visual recognition and real time video generation

-World-knowledge representation, understanding and reasoning

-Computer program understanding and generation

-Common sense reasoning

-Cognition

-Context sensitivity

-Automatic learning

-Intuition

-Creativity

-Inventiveness

-Understanding human nature and human desires and goals(not expecting full
human-level here)

-Ability to scan and store and, over time, convert and incorporate into
learned deep structure vast amounts of knowledge including ultimately all
available recorded knowledge

.

 

To do such thinking I have come up with a fairly uniform approach to all
these tasks, so I guess you could call that approach something approaching
a theory of intelligence.  But I mainly think of it as a theory of how to
get certain really cool things done.

 

I don't expect to get what is listed all at once, but, barring some major
set back, this will probably all happen (with perhaps partial exception on
the last item) within twenty years, and with the right people getting big
money most of it could substantially all happen in ten.

 

In addition, as we get closer to the threshold I think intelligence (at
least from our perspective) should include:

 

-helping make individual people, human organizations, and human government
more intelligent, happy, cooperative, and peaceful

-helping create a transition into the future that is satisfying for most
humans

 

Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



 

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=55777112-33cf1e

RE: [agi] An AGI Test/Prize

2007-10-20 Thread John G. Rose
Well, I'm neck deep in 55,000 semicolons of code in this AI app I'm building
and need to get this bastich out the do', and it's probably going to grow to
80,000 before version 1.0. But at some point it needs to grow a brain. Yes, I
have had my AGI design in mind since the late '90s and had been watching what
would happen with Intelligenesis. I think particularly along the lines of an
abstract-algebra-based engine, basically an algebraic structure pump:
everything is sets and operators with probability glue, plus a lot of SOM and
AI sensory machinery. But recent ideas in category theory are molding it, and
CAs are always rearing their tiny little heads...

 

But spending time digesting Novamente theory, and then general AGI structure,
is valuable. Things like graph-storage indexing methodology: I really need to
spend some time on that, especially learning from the people with
experience.

 

John

 

 

From: Benjamin Goertzel [mailto:[EMAIL PROTECTED] 

On 10/20/07, Edward W. Porter [EMAIL PROTECTED] wrote: 

John, 

 

So rather than a definition of intelligence you want a recipe for how to
make one?

 

Goertzel's descriptions of Novamente in his two recent books are the
closest publicly available approximation of that of which I currently know.

 


Actually, my books on how to build an AGI are not publicly available at this
point ... but I'm strongly leaning toward making them so ... it's mostly
just a matter of finding time to proofread them, remove obsolete ideas, etc.
and generally turn them from draft manuscripts into finalized manuscripts.
I have already let a bunch of people read the drafts... 

Of course, a problem with putting material like this in dead-tree form is
that the ideas are evolving.  We learn new stuff as we proceed through
implementing the stuff in the books...  But the basic framework (knowledge
rep, algorithms, cognitive architecture, teaching methodology) has not
changed as we've proceeded through the work so far, just some of the details
(wherein the devil famously lies ;-)





Re: [agi] An AGI Test/Prize

2007-10-20 Thread Benjamin Goertzel
Ah, gotcha...

The recent book Advances in Artificial General Intelligence gives a bunch
more detail than those, actually (though not as much of the conceptual
motivations as The Hidden Pattern) ... but not nearly as much as the
not-yet-released stuff...

-- Ben

On 10/20/07, Edward W. Porter [EMAIL PROTECTED] wrote:

  Ben,

 The books I was referring to were The Hidden Pattern and Artificial
 General Intelligence, both of which I purchased from Amazon.  I know you
 have a better description, but what is in these two books is quite helpful.

 Ed Porter

  -Original Message-
 *From:* Benjamin Goertzel [mailto:[EMAIL PROTECTED]
 *Sent:* Saturday, October 20, 2007 4:01 PM
 *To:* agi@v2.listbox.com
 *Subject:* Re: [agi] An AGI Test/Prize


 On 10/20/07, Edward W. Porter [EMAIL PROTECTED] wrote:

 John,



 So rather than a definition of intelligence you want a recipe for how to
 make a one?



 Goertzel's descriptions of Novamente in his two recent books are the
 closest, publicly-available approximation of that of which I currently
 know.


 Actually my book on how to build an AGI are not publicly available at this
 point ... but I'm strongly leaning toward making them so ... it's mostly
 just a matter of finding time to proofread them, remove obsolete ideas, etc.
 and generally turn them from draft manuscripts into finalized manuscripts.
 I have already let a bunch of people read the drafts...

 Of course, a problem with putting material like this in dead-tree form is
 that the ideas are evolving.  We learn new stuff as we proceed through
 implementing the stuff in the books  But the basic framework (knowledge
 rep, algorithms, cognitive architecture, teaching methodology) has not
 changed as we've proceed through the work so far, just some of the details
 (wherein the devil famously lies ;-)

 -- Ben



RE: [agi] An AGI Test/Prize

2007-10-20 Thread Edward W. Porter
John,



“[A]bstract algebra based engine” that’s “basically an algebraic structure
pump” sounds really exotic.  I’m visualizing a robo-version of my ninth
grade algebra teacher on speed.



If it's not giving away the crown jewels, what in the hell is it and how
does it fit into AGI?



And what are the always rearing “CA”s, you know, the ones with the tiny
little heads?



 Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]




Re: [agi] An AGI Test/Prize

2007-10-20 Thread Benjamin Goertzel
On 10/20/07, Edward W. Porter [EMAIL PROTECTED] wrote: You mean I wasted my
time and money by buying and reading the Novamente article in "Artificial
General Intelligence" when I could have bought the new and improved
"Advances in Artificial General Intelligence"?  What a rip-off!

Ed

(((

Bummer, eh?  ;-)

Seriously though: The articles on NM in the newer AGI edited volume don't
review the overall NM architecture and design as thoroughly as the article
on NM in the older AGI edited volume.   We tried not to be redundant in
writing the NM articles for the new volume.  However, the articles in the
new volume do go into more detail on various specific aspects of the NM
system.

One problem with the original (older) Artificial General Intelligence book
is that the articles in it were actually written in 2002, but the book did
not appear until 2006!  This was because of various delays associated with
the publishing process, which fortunately were not repeated with the newer
volume...

The good news is, the articles on NM in the newer AGI edited volume are
available online at the AGIRI.org website, on the page devoted to the 2006
AGIRI workshop...

http://www.agiri.org/forum/index.php?act=STf=21t=23

-- Ben



RE: [agi] An AGI Test/Prize

2007-10-20 Thread John G. Rose
http://en.wikipedia.org/wiki/Algebraic_structure

 

http://en.wikipedia.org/wiki/Cellular_automata

 

Start reading..
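For readers who don't want to wade through the second link, a one-dimensional cellular automaton takes only a few lines to sketch. The following illustrative Python (rule number and grid size chosen arbitrarily; none of this is from the thread) shows the core update step:

```python
def step(cells, rule=110):
    """Advance a 1-D binary cellular automaton one generation.

    Each cell's next state is looked up from the 3-bit neighborhood
    (left, self, right) in the rule number's binary expansion,
    using Wolfram's standard rule numbering. The grid wraps around.
    """
    n = len(cells)
    return [
        (rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single live cell and run a few generations.
cells = [0] * 11
cells[5] = 1
history = [cells]
for _ in range(5):
    cells = step(cells)
    history.append(cells)

for row in history:
    print("".join("#" if c else "." for c in row))
```

Despite the trivial update rule, Rule 110 is known to be Turing-complete, which is presumably part of why CAs keep rearing their heads in AGI discussions.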

 

John

 

 


Re: [agi] An AGI Test/Prize

2007-10-20 Thread Vladimir Nesov
On 10/21/07, John G. Rose [EMAIL PROTECTED] wrote:

  http://en.wikipedia.org/wiki/Algebraic_structure



 http://en.wikipedia.org/wiki/Cellular_automata



 Start reading….


John,

It doesn't really help in understanding how a system described by such terms
relates to an implementation of AGI. It sounds pretty much like "I use a
Turing machine," but with a more exotic equivalent. If you could be more
specific, it'd be interesting to have at least a rough picture of what your
approach is about.


-- 
Vladimir Nesov    mailto:[EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=55822610-45b851

RE: [agi] An AGI Test/Prize

2007-10-20 Thread John G. Rose
Vladimir,

 

That may very well be the case, and something that I'm unaware of. The system
I have in mind basically has I/O consisting of algebraic structures.
Everything it deals with is modeled this way: any sort of system it analyzes,
it converts to a particular structure that represents the data. All of its
internal mechanisms are mathematically abstracted out, except for ancillary
hard-coded out-of-band assistors: AI, statistics, database, etc. The idea
is to have a system that can understand systems and generate systems,
specifically. If it needs to model a Boolean-based space for some sort of
sampled-data world it sees and correlates to, it would generate a Boolean
algebra modeled and represented onto that informational structure for the
particular space instance being studied. Electronics theory, for example: it
would model that world as an instance, based on electronics descriptor items
and operators in that particular world or space set. The electronics-theory
world could be spat out as something very minor that it understands.
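As a toy illustration of treating data as algebraic structures (this is my sketch of the general idea, not John's actual engine; all names are made up): represent a finite structure as a carrier set plus operations, then check which axioms the data satisfies.

```python
from itertools import product

def is_associative(S, op):
    """Check (a op b) op c == a op (b op c) over the carrier S."""
    return all(op(op(a, b), c) == op(a, op(b, c)) for a, b, c in product(S, repeat=3))

def identity_of(S, op):
    """Return an identity element for op on S, or None if there is none."""
    for e in S:
        if all(op(e, a) == a == op(a, e) for a in S):
            return e
    return None

def is_group(S, op):
    """A finite carrier with an operation is a group if op is closed,
    associative, has an identity, and every element has an inverse."""
    if any(op(a, b) not in S for a, b in product(S, repeat=2)):
        return False
    if not is_associative(S, op):
        return False
    e = identity_of(S, op)
    if e is None:
        return False
    return all(any(op(a, b) == e for b in S) for a in S)

# Z/4 under addition mod 4 is a group; {0, 1} under AND is not,
# since 0 has no inverse (the identity for AND is 1).
print(is_group({0, 1, 2, 3}, lambda a, b: (a + b) % 4))  # True
print(is_group({0, 1}, lambda a, b: a & b))              # False
```

A real system along these lines would presumably infer the carrier and operation tables from observed data rather than being handed them, and would classify the resulting structure (group, ring, lattice, ...) automatically.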

 

I'm not sure my terminology is very standard, but do you understand the
thinking? It may very well be isomorphic to other AGI structures or theories,
I don't know, but I kind of like the way it works represented this way,
because it seems simple rather than messy, yet very comprehensive, and has
other good qualities.

 

John 

 

 


RE: [agi] An AGI Test/Prize

2007-10-20 Thread John G. Rose
Hi Edward,

 

I don't see any problems dealing with either discrete or continuous. In fact,
in some ways it'd be nice to eliminate discrete and just operate in
continuous mode, but discrete maps very well onto binary computers.
Continuous is just a lot of discrete, with the density depending on
resources, or defined in different ways: as ranges in sets, other
descriptors, etc.
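The "continuous is just a lot of discrete, density depending on resources" point can be sketched concretely (illustrative only; the function names are mine):

```python
def make_bins(lo, hi, budget):
    """Partition [lo, hi) into `budget` equal ranges: the 'density
    depending on resources' idea -- more resources, finer discretization."""
    width = (hi - lo) / budget
    return [(lo + i * width, lo + (i + 1) * width) for i in range(budget)]

def discretize(x, bins):
    """Map a continuous value to the index of the range containing it."""
    for i, (a, b) in enumerate(bins):
        if a <= x < b:
            return i
    return len(bins) - 1  # clamp the upper endpoint into the last range

coarse = make_bins(0.0, 1.0, 4)   # a low-resource model of [0, 1)
fine = make_bins(0.0, 1.0, 64)    # same interval, much denser
print(discretize(0.3, coarse), discretize(0.3, fine))  # 1 19
```

The same value lands in coarser or finer cells depending purely on the resource budget; the underlying continuous quantity never changes.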

 

I'm not really well versed in NARS and Novamente, so I can't comment on them,
and they are light-years down the road: basically in the implementation
stage, closer to realized utility, more than just theories.

 

Oh, those 55,000 (going on 80,000) lines of code are an AI product I am
making, so it is not AGI, but the thing basically has stubs for AGI, or could
be used by an AGI.

 

But the methodology I am talking about seems very workable with data from the
real world. It's hard for me to find things it doesn't work with, although
real tests need to be performed. BTW, I'm sure this type of thinking has been
well analyzed by many abstract-algebra mathematicians. Computability issues
exist, and these may make the theory unworkable to a certain degree. I
actually don't know enough about a lot of this math to really work it through
deeply for a feasibility study (yet), and much of it is still up in the
air.

 

John

 

 

 

What I found interesting is that, described at this very general level, what
this is saying is actually related to my view of AGI, except that it appears
to be based on a totally crisp, 1-or-0 view of the world.  If that is
correct, it may be very valuable in certain domains, which are themselves
totally or almost totally crisp, but it won't work for most human-like
thinking, because most human concepts and what they describe in the real
world are not crisp.

 

THAT IS, UNLESS YOU PLAN TO MODEL CONCEPTUAL FLUIDITY ITSELF IN A TOTALLY
CRISP, UNCERTAINTY-REPRESENTING WAY, which is obviously doable at some level.
I guess that is what you are referring to by saying our mind does crisp
thinking all the time.  Even most of us anti-crispies plan to implement our
fluid systems on digital machinery using binary representation, which we hope
will be crisp (but at the 22nm node it might be a little less than totally
crisp).

 

But the issue is: do your crisp techniques efficiently learn and represent
the fluidity of mental concepts, the non-literal similarities, the many
apparent contradictions, and the uncertainty that dominate human thinking
and sensory information about the real world?

 

And if so, how is your approach different from the Novamente/Pei
Wang-like approaches?

 

And if so, how well are your (was it?) 80,000 lines of code working at
actually representing and making sense of the shadows projected on the walls
of your AGI's cave by sensations (or data) from the "real" world?

 

Ed Porter,

 

P.S. Re "CA": maybe I am well versed in them, but I don't know what the
acronym stands for.  If it wouldn't be too much trouble, could you please
educate me on the subject?

 


RE: [agi] An AGI Test/Prize

2007-10-20 Thread Edward W. Porter
So, do you or don't you model uncertainty, contradictory evidence, degree
of similarity, and all those good things?

And what is a CA, or don't I want to know?



Edward W. Porter
Porter  Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]




Re: [agi] An AGI Test/Prize

2007-10-19 Thread Benjamin Goertzel
Well, one problem is that the current mathematical definition of general
intelligence
is exactly that -- a definition of totally general intelligence, which is
unachievable
by any finite-resources AGI system...

On the other hand, IQ tests and such measure domain-specific capabilities as
much as general learning ability...  So human-oriented IQ tests are not so
important...

I tend to think there should be some kind of test for general intelligence
that is based on the requirement for self-understanding...  Humans have
fairly rich dynamic internal models of themselves, cockroaches don't, and
dogs have only pretty lame ones...

Perhaps there could be a test that tries to measure the ability of a system
to predict its
own reaction to various novel situations?   This would require the system to
be able to
model itself internally...
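One way to make such a self-prediction test concrete (purely a sketch; the agent, the self-model, and the scoring function here are hypothetical stand-ins, not anything proposed in the thread):

```python
import random

def self_model_score(act, predict_own_action, situations):
    """Fraction of novel situations where the system's prediction of its
    own behavior matches its actual behavior. `act` is the system's real
    policy; `predict_own_action` is its internal model of that policy."""
    hits = sum(predict_own_action(s) == act(s) for s in situations)
    return hits / len(situations)

# Toy agent: a threshold policy, plus a self-model that slightly
# misjudges its own threshold (so its self-understanding is imperfect).
act = lambda danger: "flee" if danger > 0.7 else "approach"
self_model = lambda danger: "flee" if danger > 0.8 else "approach"

random.seed(0)  # deterministic toy data
situations = [random.random() for _ in range(1000)]
score = self_model_score(act, self_model, situations)
print(f"self-understanding score: {score:.2f}")
```

The agent disagrees with its self-model only on situations falling between the two thresholds, so the score lands a bit below 1.0; a perfect self-model would score exactly 1.0. As the message notes, the hard part is making such a score comparable across systems adapted to different environments.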

However, it's still hard to make this kind of test objective in any sense,
as different AGI
systems will be adapted to different kinds of environments...

Still, this is an interesting measure by which we could compare the
self-understanding of
different systems that live in the same environments...

But, still, this is not really an objective measure for intelligence ...
just another sorta-interesting
sort of test...

-- Ben





Re: [agi] An AGI Test/Prize

2007-10-19 Thread Mike Tintner



John: I think that there really needs to be more very specifically defined
quantitative measures of intelligence. ... Other qualities like creativity
and imagination would need to be measured in other ways.


The only kind of intelligence you can measure with any precision is narrow 
AI - convergent intelligence. That's what education marks out of 100 in 
tests with right/wrong answers.


The other kind - AGI - divergent intelligence - can't be marked
mathematically; it can only be graded. That's what education gives grades to:
Excellent / Very Good / Good / Poor, etc. You can't mark essays or projects,
for example, with precision (or indeed socially creative projects like
novels or new business plans (Novamente/A2I)). And they're half of education
and half of intelligence. Sorry if that distresses those who can't live
without maths - but that's life.


P.S. It's worth pointing out that there are TWO kinds of intelligence - and
there can be NO ARGUMENT about that here. You can argue about their
definitions, not about their twoness. So Ben's and Pei's mono definitions
of intelligence, for example, are a priori wrong. How can one be so
dogmatic? Well, you guys have, for a start, to be able to distinguish
between narrow AI and AGI (which by my maths makes two kinds of
intelligence) - otherwise you might as well cut your own throats,
professionally speaking.





RE: [agi] An AGI Test/Prize

2007-10-19 Thread John G. Rose
I think that there really needs to be more very specifically defined
quantitative measures of intelligence: questions that could be asked of an
AGI that would require x units of intelligence to solve, and would otherwise
be unsolvable. I know that this is a hopeless foray on this list, as there
seems to be no basic mathematical definition of intelligence (someone correct
me if I'm wrong). I do have in my mind a picture of what a minimalistic
intelligence engine would look like, so I feel strongly that there is a basic
definition, and the questions to test it would take the form of feeding it
successive layers of known background information and having it derive
specific answers to questions based on that. Other qualities like creativity
and imagination would need to be measured in other ways.
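The "successive layers of background information" idea can be sketched as a toy forward-chaining harness (all facts, rules, and names here are my illustrative assumptions, not from the thread): each added layer of background knowledge unlocks answers that were previously underivable.

```python
def derivable(facts, rules):
    """Forward-chain: repeatedly apply rules (premises -> conclusion)
    until no new facts appear. Which layer of background knowledge a
    question needs is a crude stand-in for 'x units of intelligence'."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and premises <= known:
                known.add(conclusion)
                changed = True
    return known

rules = [
    ({"socrates_is_man"}, "socrates_is_mortal"),
    ({"socrates_is_mortal", "mortals_die"}, "socrates_dies"),
]

# Layer 1: one background fact; layer 2: an additional fact is fed in.
layer1 = derivable({"socrates_is_man"}, rules)
layer2 = derivable({"socrates_is_man", "mortals_die"}, rules)
print("socrates_dies" in layer1, "socrates_dies" in layer2)  # False True
```

A test built this way can grade a system by the deepest layer of derived answers it gets right, though (as the message concedes) creativity and imagination would still need separate measures.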

John 



Re: [agi] An AGI Test/Prize

2007-10-18 Thread Benjamin Goertzel



 I guess, off the top of my head, the conversational equivalent might be a
 Story Challenge - asking your AGI to tell some explanatory story about a
 problem that had occurred to it recently (designated by the tester), and
 then perhaps asking it to devise a solution. Just my first thought - but the
 point is any AGI Test should be a focused challenge, like the ICRA, not a
 vague Interview Test or similar.



Hmmm... the storytelling direction is interesting.

E.g., you could tell the first half of a story to the test-taker, and ask
them
to finish it...

-- Ben


Re: [agi] An AGI Test/Prize

2007-10-18 Thread Russell Wallace
On 10/18/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:
 Hmmm... the storytelling direction is interesting.

 E.g., you could tell the first half of a story to the test-taker, and ask
 them
 to finish it...

Or better, draw an animation of (both halves of) it.



Re: [agi] An AGI Test/Prize

2007-10-18 Thread Vladimir Nesov
I think an AGI test should fundamentally be a test of learning ability. When
there's a specified domain in which the system should demonstrate its
competency (like 'chatting' or 'playing Go'), it's likely easier to write a
narrow solution. If the system is not already an RSI AI, the resulting
competency depends too much on the quirks of the given domain, and it's
unclear how improvements in general learning ability translate into
competency.

I see such a test along the lines of feeding the system a stream of
frame-like representations; it should then be able to fill in the blanks in
incomplete representations based on analogies. This is general enough to be
AGI-complete, and simple enough to test existing narrow AI systems.
Depending on the supplied data, it can be taken out of reach of algorithms
that are too biased towards their narrow domain. Frame-like representations
make it possible to construct tasks of different complexity according to
human intuition, and likewise to test their feasibility. The input stream
shouldn't be too cluttered (it shouldn't include things like the Cyc
database, Wikipedia, etc.), and the test should assume zero prior knowledge.
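A toy rendition of this frame-completion idea, to make it concrete. Everything here is an illustrative assumption, not Vladimir's actual proposal: frames are plain dicts, blanks are `None` slots, and "analogy" is reduced to copying from the most-overlapping known frame.

```python
# Toy frame-completion test: fill the blank slot of a partial frame by
# analogy with the most similar complete frame seen so far.

def similarity(a, b):
    # Count slots on which the two frames agree.
    return sum(1 for k in a if k in b and a[k] == b[k])

def fill_blank(known_frames, partial):
    # Slots with value None are the blanks to be filled.
    blanks = [k for k, v in partial.items() if v is None]
    best = max(known_frames, key=lambda f: similarity(f, partial))
    completed = dict(partial)
    for k in blanks:
        completed[k] = best.get(k)
    return completed

known = [
    {"agent": "bird", "action": "flies", "medium": "air"},
    {"agent": "fish", "action": "swims", "medium": "water"},
]
partial = {"agent": "eel", "action": "swims", "medium": None}
print(fill_blank(known, partial))
# {'agent': 'eel', 'action': 'swims', 'medium': 'water'}
```

A real test in this spirit would of course need far richer frames and a less brittle analogy measure; the point of the sketch is only the task shape - stream in representations, then score the system on completing held-out blanks.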

-- 
Vladimir Nesov  mailto:[EMAIL PROTECTED]
