Re: [agi] Poll

2007-10-20 Thread J Storrs Hall, PhD
On Friday 19 October 2007 10:36:04 pm, Mike Tintner wrote:
 "The best way to get people to learn is to make them figure things out for
 themselves."

Yeah, right. That's why all Americans understand the theory of evolution so 
well, and why Britons have such an informed acceptance of 
genetically-modified foods. It's why Galileo had such an easy time convincing 
the Church that the earth goes around the sun. It's why the Romans widely 
adopted the steam engine following its invention by Heron of Alexandria. It's 
why the Inquisition quickly realized that witchcraft is a superstition, 
rather than burning innocent women at the stake.

The truth is exactly the opposite: Humans are built to propagate culture 
memetically, by copying each other; the amount we know individually by this 
process is orders of magnitude greater than what we could have figured out 
for ourselves. Reigning orthodoxy of thought is *very hard* to dislodge, even 
in the face of plentiful evidence to the contrary. 

Isaac Asimov famously said that the most exciting moment in science is when 
someone says, "That's funny..." But the reason to point it out is that it 
*doesn't* happen all the time, even in science (it's not "normal science" in 
Kuhn's phrase), and even less so outside of it. 

In the real world, when people get confused and work out a way around it, what 
they're learning is not an inventive synthesis of the substance at issue, but 
an attention filter. And that, for the average person, is usually just 
picking an authority figure.

Theirs not to reason why; theirs but to do and die.

Humans are *stupid*, Mike. You're still committing the superhuman human 
fallacy.

Josh

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=55686241-899d6e


RE: [agi] evolution-like systems

2007-10-20 Thread Edward W. Porter
Isn't one of the key concepts behind hierarchical memory (as described in
Jeff Hawkins's work, the Serre paper I have cited, Rodney Brooks's
subsumption, etc.) exactly that it builds hierarchically upon the
regularities and modularities of whatever world it is learning in, acting
in, and representing?
Ed Porter

-Original Message-
From: Robert Wensman [mailto:[EMAIL PROTECTED]
Sent: Saturday, October 20, 2007 7:39 AM
To: agi@v2.listbox.com
Subject: Re: [agi] evolution-like systems



I am not exactly sure how GA (genetic algorithms) and GP (genetic
programming) are defined. The concepts of genes and evolution are very much
interconnected, so how we define genetic algorithms and genetic programming
depends on how we define evolutionary learning, which is partially the
topic of this thread.

But to sidestep that question and still answer yours: GA and GP need to be
pretty advanced to quell the combinatorial explosion you speak of if they
are to be used in AGI. Starting a naive evolutionary process without trying
to speed things up would be pointless. It could take zillions of years to
get anywhere, basically repeating biological evolution in a computer.

I believe Ben Goertzel and Novamente have some interesting points on this
topic. His basic point is that many necessary AGI algorithms are inherently
exponential and prone to combinatorial explosion, so the key issue is to
interconnect many different systems so they help each other overcome their
inherent drawbacks. In their case they combine evolutionary learning with
probabilistic reasoning.

I have thought about another way to do the same thing, although my ideas
are far from fully thought out. My idea is that evolutionary learning of a
world model needs to somehow exploit the modularity of reality to factorize
the adaptive process.

If an adaptive process is factorized, the time necessary to perform it can
decrease drastically. This is a universal phenomenon, true regardless of
the adaptive/learning/evolutionary algorithm. For example, the simplest
adaptive process just creates a model at random and tests whether it is
correct. If a model is described using 32 bits, the time for adaptation is
on the order of 2^32 trials. But if the model can be divided into two
independent 16-bit parts, the order of adaptation is only
2^16 + 2^16 = 2^17.
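The arithmetic above can be checked with a tiny guess-and-test experiment. This is only a sketch of the factorization argument, with a hypothetical bit-string "model" (scaled down to 16 bits so it runs quickly) and arbitrary seeds:

```python
import random

def random_search(n_bits, target, rng, max_tries=10**7):
    """Guess-and-test adaptation: draw random n-bit models until one matches."""
    for tries in range(1, max_tries + 1):
        if rng.getrandbits(n_bits) == target:
            return tries
    return max_tries

target = random.Random(0).getrandbits(16)

# Unfactorized: search the whole 16-bit model (expected ~2^16 guesses).
whole = random_search(16, target, random.Random(1))

# Factorized: search two independent 8-bit halves (expected ~2*2^8 guesses).
parts = (random_search(8, target >> 8, random.Random(2))
         + random_search(8, target & 0xFF, random.Random(3)))

print(whole, parts)
```

On typical runs the factorized search needs orders of magnitude fewer guesses, mirroring the 2^32 versus 2^17 figures in the text.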

Fortunately, in our world objects are somewhat independent of each other.
I can rest assured that the inner state and mechanics of my toaster do not
interfere with the inner state and mechanics of my microwave oven. This
means I could hypothetically apply evolutionary learning to my toaster and
microwave oven separately, and factorize the learning process in that way.

In addition, in our world there seem to be classes of objects of similar
design and function. If I understand the basics of a tree, for example, I
can apply this model to many more trees in a forest. These regularities are
also something that could be exploited to speed up adaptation, though
perhaps in a different way.

So basically yes, making evolutionary learning work fast enough is what
AGI is all about. But these methods of speeding things up do not, in my
opinion, make it any less of an evolution. The reason I like the concept of
evolutionary learning is that it implies a form of open-endedness, similar
to how we think the thoughts of an intellect can go in any direction. The
words "learning" and "adaptation" have been overused in narrow AI, in
oversimplified contexts.

I would like to direct a question to Ben Goertzel if he happens to read
this. I am a fan of Novamente and its ideas for quelling combinatorial
explosions. But I wonder whether they have ever thought along the lines
presented here, trying to factorize adaptation by using the modularity of
reality.

/Robert W

2007/10/20, William Pearson [EMAIL PROTECTED]:

On 20/10/2007, Robert Wensman [EMAIL PROTECTED] wrote:
 It seems your question is stated on the meta-discussion level, since you
 ask for a reason why there are two different beliefs.

 I can only answer for myself, but to me some form of evolutionary learning
 is essential to AGI. Actually, I define intelligence to be an eco-system
 of ideas that compete for survival. The fitness of such ideas is
 determined through three aspects:

The trouble with the word "evolution" is that it brings to mind
Darwinian evolution, which is rightly dismissed as slow and random.
Computational selectionist systems can be Lamarckian, or the programs
can learn by themselves as well as being selected, so the speed limits
of Darwinian evolution do not apply. The central dogma of molecular
biology also does not apply.

However, this does mean that you have to use systems more advanced than
GA or GP to avoid the criticism that evolutionary systems are
inadequate for intelligence.

Will Pearson


Re: [agi] An AGI Test/Prize

2007-10-20 Thread Gabriel Recchia
Has anyone come across (or written) any papers that argue for particular
low-level capabilities that any system capable of human-level intelligence
must possess, and which posit particular tests for assessing whether a
system possesses these prerequisites for intelligence?  I'm looking for
anything like this, or indeed anything that tries to lay out an incremental
path toward AGI with testable benchmarks along the way.  I'd be very
appreciative if anyone could point me to any such work.

Gabe


On 10/19/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:



 
  I largely agree. It's worth pointing out that Carnot published
  Reflections on
  the Motive Power of Fire and established the science of thermodynamics
  more
  than a century after the first working steam engines were built.
 
  That said, I opine that an intuitive grasp of some of the important
  elements
  in what will ultimately become the science of intelligence is likely to
  be
  very useful to those inventing AGI.
 


 Yeah, most certainly. However, an intuitive grasp -- and even a
 well-fleshed-out qualitative theory supplemented by heuristic
 back-of-the-envelope calculations and prototype results -- is very
 different from a defensible, rigorous theory that can stand up to the
 assaults of intelligent detractors.

 I didn't start seriously trying to design & implement AGI until I felt I
 had a solid intuitive grasp of all related issues.  But I did make a
 conscious choice to devote more effort to utilizing my intuitive grasp to
 try to design and create AGI, rather than to creating better general AI
 theories.  Both are worthy pursuits, and both are difficult.  I actually
 enjoy theory better.  But my sense is that the heyday of AGI theorizing is
 gonna come after AGI experimentation has progressed a good bit further
 than it has today...

 -- Ben G



Re: [agi] Human memory and number of synapses

2007-10-20 Thread Mark Waser
What I'd like is a mathematical estimate of why a graphic or image (or any 
form of physical map) is a vastly - if not infinitely - more efficient way 
to store information than a set of symbols.


Yo troll . . . . a graphic or image is *not* a vastly - if not infinitely - 
more efficient way to store information than a set of symbols.


Take your own example of an outline map -- *none* of the current high-end 
mapping services (MapQuest, Google Maps, etc.) store their maps as images. 
They *all* store them symbolically in a relational database, because that is 
*the* most efficient way to store them so that they can produce all of the 
different scale maps and directions that they provide every day.
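To make the symbolic-storage point concrete, here is a minimal sketch: store one coordinate pair per town and *derive* any pairwise distance on demand. The coordinates below are made-up illustrative values, not real geography.

```python
from math import dist  # Euclidean distance (Python 3.8+)

# Hypothetical planar coordinates in km (illustrative, not real geography).
towns = {
    "London": (0.0, 0.0),
    "Oxford": (-80.0, 30.0),
    "York": (-20.0, 280.0),
    "Cardiff": (-210.0, 20.0),
}

def distance(a, b):
    """N stored points yield all N*(N-1)/2 pairwise distances for free."""
    return dist(towns[a], towns[b])

print(round(distance("London", "Oxford")))  # 85: derived, never stored per-pair
```

Four stored points already encode six pairwise distances; a thousand points encode roughly half a million, which is exactly why the services store points symbolically rather than baking distances (or pixels) in.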


Congratulations!  You've just disproved your prime pet theory.  (or do you 
believe that you're smarter than all of those engineers?)


And all this is important, because it will affect estimates of what brains 
and computers can do.


What brains can do and what computers can do are very different.  The brain 
evolved by a linear optimization process with *numerous* non-brain-related 
constraints because of all of the spaghetti-code-like intertwining of all 
the body's systems.  It is quite probable that the brain can be optimized a 
lot!



(No computer can yet store and read a map as we do, can it?)


What can you do with a map that Google Maps can't?  Google Maps may not 
store and read maps like you do, but functionally it is better than you 
(faster, more info, etc.).



- Original Message - 
From: Mike Tintner [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Saturday, October 20, 2007 7:33 AM
Subject: Re: [agi] Human memory and number of synapses



Vlad et al,

Slightly O/T - while you guys are arguing about how much info the brain 
stores and processes...


What I'd like is a mathematical estimate of why a graphic or image (or any 
form of physical map) is a vastly - if not infinitely - more efficient way 
to store information than a set of symbols.


Take an outline map of a country, with points for the towns. That map 
contains a practically endless amount of info about the relationships 
between all the towns and every point in the country - about how distant 
they all are from each other - and therefore about every possible travel 
route across the country.


Now try expressing that info as a set of symbolic relationships - London 
to York 300 Miles, London to Oxford 60 Miles, London to Cardiff 200 
miles - and so on and on.


If you think just about the ink or whatever substrate is used to write the 
info, the map is vastly more efficient.


And all this is important, because it will affect estimates of what brains 
and computers can do. A great deal of the brain's memory is stored, I 
suggest, in the form of maps of one kind or other. (No computer can yet 
store and read a map as we do, can it?)









Re: [agi] Poll

2007-10-20 Thread A. T. Murray
 [...]
 Reigning orthodoxy of thought is *very hard* to dislodge, 
 even in the face of plentiful evidence to the contrary. 

Amen, brother! Rem acu tetigisti! That's why

http://mentifex.virtualentity.com/theory5.html 

is like the small mammals scurrying beneath dinosaurs.

ATM
--
http://mind.sourceforge.net/aisteps.html 



Re: [agi] An AGI Test/Prize

2007-10-20 Thread David McFadzean
On 10/19/07, Matt Mahoney [EMAIL PROTECTED] wrote:

 http://www.vetta.org/documents/ui_benelearn.pdf

 Unfortunately the test is not computable.

True but how about testing intelligence by comparing the performance
of an agent across several computable environments (randomly-generated
finite games) to the performance of a random agent? I suspect this
measure would provide a reasonable estimate of Hutter's definition and
could be made arbitrarily more accurate by increasing the number and
complexity of test environments.
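A sketch of that benchmark idea, under assumptions of my own: each "environment" is a randomly generated finite game given by reward and transition tables, and the "agent" is just a myopic greedy policy standing in for something smarter.

```python
import random

def random_game(rng, n_states=5, n_actions=3, horizon=10):
    """A randomly generated finite game: reward and transition tables."""
    reward = [[rng.random() for _ in range(n_actions)] for _ in range(n_states)]
    trans = [[rng.randrange(n_states) for _ in range(n_actions)] for _ in range(n_states)]
    return reward, trans, horizon

def play(policy, game, rng):
    """Total reward collected by a policy over one run of the game."""
    reward, trans, horizon = game
    state, total = 0, 0.0
    for _ in range(horizon):
        action = policy(state, reward, rng)
        total += reward[state][action]
        state = trans[state][action]
    return total

def random_policy(state, reward, rng):
    return rng.randrange(len(reward[state]))

def greedy_policy(state, reward, rng):
    # Stand-in "agent": myopically picks the highest immediate reward.
    return max(range(len(reward[state])), key=lambda a: reward[state][a])

games = [random_game(random.Random(i)) for i in range(200)]

def score(policy):
    return sum(play(policy, g, random.Random(7)) for g in games) / len(games)

# A more capable agent should outscore the random baseline on average;
# the gap is the (rough) intelligence estimate, refinable by adding
# more and harder environments.
print(score(greedy_policy) - score(random_policy))
```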

David



Re: [agi] Human memory and number of synapses.. P.S.

2007-10-20 Thread Mike Dougherty
On 10/20/07, Mark Waser [EMAIL PROTECTED] wrote:
 Images are *not* an efficient way to store data.  Unless they are
 three-dimensional images, they lack data.  Normally, they include a lot of
 unnecessary or redundant data.  It is very, very rare that a computer stores
 any but the smallest image without compressing it.  And remember, an image
 can be stored as symbols in a relational database very easily as a set of
 x-coords, y-coords, and colors.

Maps ARE symbols.  Whether it's a paper street map or Google Maps,
they're a collection of simple symbols that represent the objects
they're mapping.  At the most ridiculous level, each pixel on the screen
is a symbol that your optic nerve detects and passes to your brain to
find some meaningful correspondence to interpret.

I think the point Mark is making is that the representation
(display) of data can resemble a map -- but the map (or image) is
only one possible interpretation of the data.  There are algorithms to
provide close-enough approximations of details where there is
insufficient data.  E.g., it is unlikely that an elevation map would
have a 1000-meter variance over a 2-meter gap in the data points if
either side of the gap is at equal elevation.  That kind of "smoothing"
cannot be done with images alone -- there must be data. If you have
only map images, you would have to extract data from the map before
you can use it effectively against other data.  So why store the data
in an image in the first place?  Arguably, the data storage mechanism
is irrelevant -- there will be decisions made about performance
depending on the initial acquisition and later retrieval realities:
maybe a camera streams video directly to disk to achieve high
throughput, then later analysis compresses the scene into a symbolic
representation at less than a realtime rate.  You can't really argue
that the video stream is an ideal way to manage the details in a
knowledgebase. (eh Mike?)
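The smoothing point can be illustrated with a toy example of my own (1-D, linear interpolation only): it works precisely because the samples are numbers, not pixels.

```python
def fill_gap(samples):
    """Linearly interpolate None gaps in a 1-D elevation profile.

    Assumes gaps are interior, i.e. bounded by known samples on both sides.
    Symbolic samples make this trivial; raw map pixels would first have to
    be converted back into numbers.
    """
    out = list(samples)
    i = 0
    while i < len(out):
        if out[i] is None:
            j = i
            while out[j] is None:  # find the far edge of the gap
                j += 1
            left, right, span = out[i - 1], out[j], j - i + 1
            for k in range(i, j):
                out[k] = left + (right - left) * (k - i + 1) / span
            i = j
        i += 1
    return out

profile = [100.0, 102.0, None, None, 108.0]  # metres, with a 2-sample gap
print(fill_gap(profile))  # [100.0, 102.0, 104.0, 106.0, 108.0]
```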



RE: [agi] An AGI Test/Prize

2007-10-20 Thread Edward W. Porter
I guess I am mundane.  I don’t spend a lot of time thinking about a
“definition of intelligence.”  Goertzel’s is good enough for me.



Instead I think in  terms of what I want these machines to do -- which
includes human-level:



-NL understanding and generation (including discourse level)

-Speech recognition and generation (including appropriate pitch and volume
modulation)

-Non-speech auditory recognition and generation

-Visual recognition and real time video generation

-World-knowledge representation, understanding and reasoning

-Computer program understanding and generation

-Common sense reasoning

-Cognition

-Context sensitivity

-Automatic learning

-Intuition

-Creativity

-Inventiveness

-Understanding human nature and human desires and goals (not expecting full
human-level here)

-Ability to scan and store and, over time, convert and incorporate into
learned deep structure vast amounts of knowledge including ultimately all
available recorded knowledge





To do such thinking I have come up with a fairly uniform approach to all
these tasks, so I guess you could call that approach something approaching
a theory of intelligence.  But I mainly think of it as a theory of how
to get certain really cool things done.



I don’t expect to get everything listed all at once, but, barring some major
setback, all of this will probably happen (with perhaps a partial exception
for the last item) within twenty years, and with the right people getting
big money, substantially all of it could happen in ten.



In addition, as we get closer to the threshold I think “intelligence” (at
least from our perspective) should include:



-helping make individual people, human organizations, and human government
more intelligent, happy, cooperative, and peaceful

-helping create a transition into the future that is satisfying for most
humans


Edward W. Porter
Porter  Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]

-Original Message-
From: John G. Rose [mailto:[EMAIL PROTECTED]
Sent: Saturday, October 20, 2007 1:27 PM
To: agi@v2.listbox.com
Subject: RE: [agi] An AGI Test/Prize



Interesting background on some thermodynamics history. :-)



But basic definitions of intelligence -- not talking about reinventing
particle physics here, just a basic, workable definition, not a rigorous
mathematical proof, just something simple. AI, AGI, c'mon, not asking for
too much. In my mind it is not looking that sophisticated at the atomic
level, and it seems like it is VERY applicable for implementation, if not
required for testing. Though Hutter and Legg are apparently working
diligently on this stuff and have a lot of papers.



John





I largely agree. It's worth pointing out that Carnot published
Reflections on
the Motive Power of Fire and established the science of thermodynamics
more
than a century after the first working steam engines were built.

That said, I opine that an intuitive grasp of some of the important
elements
in what will ultimately become the science of intelligence is likely to be
very useful to those inventing AGI.



Yeah, most certainly. However, an intuitive grasp -- and even a
well-fleshed-out qualitative theory supplemented by heuristic
back-of-the-envelope calculations and prototype results -- is very
different from a defensible, rigorous theory that can stand up to the
assaults of intelligent detractors.

I didn't start seriously trying to design & implement AGI until I felt I
had a solid intuitive grasp of all related issues.  But I did make a
conscious choice to devote more effort to utilizing my intuitive grasp to
try to design and create AGI, rather than to creating better general AI
theories.  Both are worthy pursuits, and both are difficult.  I actually
enjoy theory better.  But my sense is that the heyday of AGI theorizing is
gonna come after AGI experimentation has progressed a good bit further
than it has today...






Re: [agi] An AGI Test/Prize

2007-10-20 Thread Robert Wensman
Regarding testing grounds for AGI: personally I feel that ordinary computer
games could provide an excellent proving ground for the early stages of AGI,
or even better ones if they are specially constructed. Computer games are
usually designed to encourage the player towards creativity and exploration.
Take a simple platform game, for example: at every new stage, new graphics
and monsters are introduced, and by and large the player undergoes a
continuous self-training that lasts throughout the whole game. Game
developers carefully distribute rewards and challenges to make this learning
process as smooth as possible.

But I would also like to say that any given proving ground for the first
stages of AGI could be misused if AGI designers bring specialized code into
their systems. So if there is to be a competition for first-generation AGI,
there would have to be some referee who evaluates how much domain-specific
knowledge has been encoded into any given system.

For the late development stages of AGI, where we basically have virtual
human minds, we could use problems so hard that specialized code could not
help the AGI system anymore. But I guess that by that time we would have
basically already solved the problem of AGI, and competitions where AGI
systems compete in writing essays on some subject could only be used to
polish an already-outlined solution to AGI.

I am a fan of Novamente, but, for example, when I watched the movie where
they trained an AGI dog, I was left wondering which parts of its cognition
were specialized. For example, the human teacher used natural language to
talk to the dog. Did the dog understand any of it, and in that case, was
there a special language module involved? Also, training a dog is quite
open-ended, and it is difficult to assess what counts as progress. This
shows just how difficult it is to demonstrate AGI. Any demonstration of AGI
would have to come with a list of which cognitive aspects are coded and
which are learnt. Only then can you understand whether it is impressive or
not.

Also, because we need firm rules about what can be pre-programmed and what
needs to be learnt, it is easier if we use some world with pretty simple
mechanics. What I basically would like to see is an AGI learning to play a
certain computer game, starting by learning the fundamentals, and then
playing it to the end. Take an old videogame classic like The Legend of
Zelda (http://www.zelda.com/universe/game/zelda/). I know a lot of you
would say that this is a far too simplistic world for training an AGI, but
not if you prohibit ANY pre-programmed knowledge. You only allow the AGI
system to start with proto-knowledge representation, and you basically
hard-wire the in-game rewards and punishments to the goals of the AGI. The
AGI system would then have to learn basic concepts such as:

objects moving around on the screen
which graphics correspond to yourself
walls that limit where you can go
keys that open doors
the concept of coming to a new screen when walking off the edge of one
how screens relate to each other
teleportation (the flute, for anyone who remembers)

If the AGI system can then learn to play the game to the end and slay
Ganon based only on proto-knowledge, then maybe we have something
interesting going on. Such an AGI could perhaps be compared to a rodent
running in a maze, even if the motor and vision systems are more
complicated. Then we are ready to increase the complexity of the computer
game, adding communication with other characters, more complex concepts
and puzzles, more dimensions, more motor skills, etc.
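A sketch of the "hard-wire the rewards, prohibit pre-programmed knowledge" setup, scaled down to a toy corridor instead of Zelda (all names and numbers here are my own illustration): the agent starts with an empty value table and learns purely from the wired-in reward signal.

```python
import random
from collections import defaultdict

class Corridor:
    """Toy stand-in for a game world: 5 cells, reward only at the goal cell."""
    def reset(self):
        self.pos = 0
        return self.pos
    def step(self, action):  # 0 = left, 1 = right
        self.pos = max(0, min(4, self.pos + (1 if action else -1)))
        done = self.pos == 4
        return self.pos, (1.0 if done else 0.0), done  # hard-wired reward

def train(episodes=200, eps=0.1, alpha=0.5, gamma=0.9, seed=0):
    rng, env = random.Random(seed), Corridor()
    q = defaultdict(float)  # starts empty: no pre-programmed knowledge
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            if rng.random() < eps or q[(s, 0)] == q[(s, 1)]:
                a = rng.randrange(2)  # explore (or break ties randomly)
            else:
                a = 0 if q[(s, 0)] > q[(s, 1)] else 1
            s2, r, done = env.step(a)
            # Tabular Q-learning update driven only by the in-game reward.
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, 0)], q[(s2, 1)]) - q[(s, a)])
            s = s2
    return q

q = train()
print(q[(3, 1)] > q[(3, 0)])  # the step into the goal earns the highest learned value
```

Everything a real system would need for Zelda (vision, object concepts, screen topology) is abstracted away here; the point is only the learning loop's shape, with the reward channel as the sole built-in.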

Basically, I would like to see Novamente and similar AGI systems play some
goal-oriented computer game, since AGI in itself needs to be goal-oriented.

/R



2007/10/20, Benjamin Goertzel [EMAIL PROTECTED]:



 
  I largely agree. It's worth pointing out that Carnot published
  Reflections on
  the Motive Power of Fire and established the science of thermodynamics
  more
  than a century after the first working steam engines were built.
 
  That said, I opine that an intuitive grasp of some of the important
  elements
  in what will ultimately become the science of intelligence is likely to
  be
  very useful to those inventing AGI.
 


 Yeah, most certainly. However, an intuitive grasp -- and even a
 well-fleshed-out qualitative theory supplemented by heuristic
 back-of-the-envelope calculations and prototype results -- is very
 different from a defensible, rigorous theory that can stand up to the
 assaults of intelligent detractors.

 I didn't start seriously trying to design & implement AGI until I felt I
 had a solid intuitive grasp of all related issues.  But I did make a
 conscious choice to devote more effort to utilizing my intuitive grasp to
 try to design and create AGI, rather than to creating better general AI
 theories.  Both are worthy pursuits, and both are difficult.  I actually
 enjoy theory better.  But my sense is that the heyday of AGI 

RE: [agi] An AGI Test/Prize

2007-10-20 Thread John G. Rose
No, you are not mundane. All these things on the list (or most) are very
much to be expected from a generally intelligent system or its derivatives.
But I have this urge, being a software developer, to smash all these things
up into their constituent components, partition commonalities, eliminate
dupes, and perhaps further smash them up into an atomic representation of
intelligence as little intelligent engines that can be combined in various
ways to build higher-level functions. Kind of like a cellular automata
approach, and perhaps CA structures can be used. I really don't want to
waste 10 years developing a giant piece of bloated code that never fully
works. Better to exhaust all possibilities in the mind and on paper first,
as software dev can be a giant PIA mess if not thought out beforehand. Yes,
you can only go so far before doing prototyping and testing, but certain
prototypes can take many months to build.
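As a toy illustration of the "little engines combined" idea (an elementary cellular automaton, nothing specific to any particular AGI design): each cell runs the same tiny fixed rule, yet the combination produces non-trivial global behaviour.

```python
def step(cells, rule=110):
    """One synchronous update of an elementary CA on a ring: each cell's next
    state depends only on its 3-cell neighbourhood, looked up as a bit of
    `rule`."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

row = [0] * 15 + [1] + [0] * 15  # a single live cell
for _ in range(5):
    row = step(row)
print(sum(row))  # activity spreads from purely local interactions
```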

 

Several on this email list have already gotten to this point, and it may be
more productive to digest their systems instead of reinventing.  Even so,
that leaves many questions open about testing. Someone can claim they have
AGI, but how do you really know? It could be just a highly sophisticated
chatterbot.

 

 John

 

 

From: Edward W. Porter [mailto:[EMAIL PROTECTED] 



[...]



 


RE: [agi] An AGI Test/Prize

2007-10-20 Thread John G. Rose
Well, I'm neck deep in 55,000 semicolons of code in this AI app I'm building
and need to get this bastich out the do', and it's probably going to grow to
80,000 before version 1.0. But at some point it needs to grow a brain. Yes,
I have had my AGI design in mind since the late '90s and had been watching
what would happen with Intelligenesis. I particularly think along the lines
of an abstract-algebra-based engine, basically an algebraic structure pump.
Everything is sets and operators with probability glue, a lot of SOM and AI
sensory. But recent ideas in category theory are molding it, and CAs are
always rearing their tiny little heads...

 

But spending time digesting Novamente theory and then general AGI structure
-- that's valuable. Things like graph storage indexing methodology really
need some time spent on them, especially learning from the people with
experience.

 

John

 

 

From: Benjamin Goertzel [mailto:[EMAIL PROTECTED] 

On 10/20/07, Edward W. Porter [EMAIL PROTECTED] wrote: 

John, 

 

So rather than a definition of intelligence, you want a recipe for how to
make one?

 

Goertzel's descriptions of Novamente in his two recent books are the
closest publicly available approximation I currently know of.

 


Actually, my books on how to build an AGI are not publicly available at this
point ... but I'm strongly leaning toward making them so ... it's mostly
just a matter of finding time to proofread them, remove obsolete ideas, etc.,
and generally turn them from draft manuscripts into finalized manuscripts.
I have already let a bunch of people read the drafts... 

Of course, a problem with putting material like this in dead-tree form is
that the ideas are evolving.  We learn new stuff as we proceed through
implementing the stuff in the books...  But the basic framework (knowledge
rep, algorithms, cognitive architecture, teaching methodology) has not
changed as we've proceeded through the work so far, just some of the details
(wherein the devil famously lies ;-)





Re: [agi] An AGI Test/Prize

2007-10-20 Thread Benjamin Goertzel
Ah, gotcha...

The recent book Advances in Artificial General Intelligence gives a bunch
more detail than those, actually (though not as much of the conceptual
motivations as The Hidden Pattern) ... but not nearly as much as the
not-yet-released stuff...

-- Ben

On 10/20/07, Edward W. Porter [EMAIL PROTECTED] wrote:

  Ben,

 The books I was referring to were The Hidden Pattern and Artificial
 General Intelligence, both of which I purchased from Amazon.  I know you
 have a better description, but what is in these two books is quite helpful.

 Ed Porter

  -Original Message-
 *From:* Benjamin Goertzel [mailto:[EMAIL PROTECTED]
 *Sent:* Saturday, October 20, 2007 4:01 PM
 *To:* agi@v2.listbox.com
 *Subject:* Re: [agi] An AGI Test/Prize


 On 10/20/07, Edward W. Porter [EMAIL PROTECTED] wrote:

 John,



 So rather than a definition of intelligence you want a recipe for how to
 make one?



 Goertzel's descriptions of Novamente in his two recent books are the
 closest, publicly-available approximation of that of which I currently
 know.


 Actually my books on how to build an AGI are not publicly available at this
 point ... but I'm strongly leaning toward making them so ... it's mostly
 just a matter of finding time to proofread them, remove obsolete ideas, etc.
 and generally turn them from draft manuscripts into finalized manuscripts.
 I have already let a bunch of people read the drafts...

 Of course, a problem with putting material like this in dead-tree form is
 that the ideas are evolving.  We learn new stuff as we proceed through
 implementing the stuff in the books  But the basic framework (knowledge
 rep, algorithms, cognitive architecture, teaching methodology) has not
 changed as we've proceeded through the work so far, just some of the details
 (wherein the devil famously lies ;-)

 -- Ben


  Edward W. Porter
  Porter  Associates
  24 String Bridge S12
  Exeter, NH 03833
  (617) 494-1722
  Fax (617) 494-1822
  [EMAIL PROTECTED]
 
   -Original Message-
  *From:* John G. Rose [mailto:[EMAIL PROTECTED]
  *Sent:* Saturday, October 20, 2007 3:16 PM
  *To:* agi@v2.listbox.com
  *Subject:* RE: [agi] An AGI Test/Prize
 
   No you are not mundane. All these things on the list (or most) are very
  well to be expected from a generally intelligent system or its derivatives.
  But I have this urge, being a software developer, to smash all these things
  up into their constituent components, partition commonalties, eliminate
  dupes, and perhaps further smash up into an atomic representation of
  intelligence as little intelligent engines that can be combined in various
  ways to build higher level functions. Kind of like a cellular automata
  approach and perhaps CA structures can be used. I really don't want to waste
  10 years developing a giant piece of bloatage code that never fully works.
  Better to exhaust all possibilities in the mind and on paper as much as
  possible as software dev can be a giant PIA mess if not thought out
  beforehand as much as possible. Yes you can go so far before doing
  prototyping and testing but certain prototypes can take many months to
  build.
 
 
 
  Several on this email list have already gotten to this point and it may
  be more productive digesting their systems instead of reinventing…  Even so
  that leaves many questions open about testing. Someone can claim they have
  AGI but how do you really know, could be just a highly sophisticated
  chatterbot.
 
 
 
   John
 
 
 
 
 
  *From:* Edward W. Porter [mailto:[EMAIL PROTECTED]
 
   I guess I am mundane.  I don't spend a lot of time thinking about a
  definition of intelligence.  Goertzel's is good enough for me.
 
 
 
  Instead I think in  terms of what I want these machines to do -- which
  includes human-level:
 
 
 
  -NL understanding and generation (including discourse level)
 
  -Speech recognition and generation (including appropriate pitch and
  volume modulation)
 
  -Non-speech auditory recognition and generation
 
  -Visual recognition and real time video generation
 
  -World-knowledge representation, understanding and reasoning
 
  -Computer program understanding and generation
 
  -Common sense reasoning
 
  -Cognition
 
  -Context sensitivity
 
  -Automatic learning
 
  -Intuition
 
  -Creativity
 
  -Inventiveness
 
  -Understanding human nature and human desires and goals(not expecting
  full human-level here)
 
  -Ability to scan and store and, over time, convert and incorporate into
  learned deep structure vast amounts of knowledge including ultimately all
  available recorded knowledge
 
  .
 
 
 
  To do such thinking I have come up with a fairly uniform approach to all
  these tasks, so I guess you could call that approach something
  approaching a theory of intelligence.  But I mainly think of it as a
  theory of how to get certain really cool things done.
 
 
 
  I don't expect to get what is listed all at once, but, barring some
  major set back, this will probably 

RE: [agi] An AGI Test/Prize

2007-10-20 Thread Edward W. Porter
John,



“[A]bstract algebra based engine” that’s “basically an algebraic structure
pump” sounds really exotic.  I’m visualizing a robo-version of my ninth
grade algebra teacher on speed.



If it's not giving away the crown jewels, what in the hell is it and how
does it fit into AGI?



And what are the always rearing “CA”s, you know, the ones with the tiny
little heads?



 Edward W. Porter
Porter  Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: John G. Rose [mailto:[EMAIL PROTECTED]
Sent: Saturday, October 20, 2007 4:44 PM
To: agi@v2.listbox.com
Subject: RE: [agi] An AGI Test/Prize



Well I’m neck deep in 55,000 semi-colons of code in this AI app I’m
building and need to get this bastich out the do’ and it’s probably going
to grow to 80,000 before version 1.0. But at some point it needs to grow a
brain. Yes, I've had my AGI design in mind since the late '90s and had been
watching what would happen with Intelligenesis. I particularly think along
the lines of an abstract algebra based engine basically an algebraic
structure pump. Everything is sets and operators with probability glue, a
lot of SOM and AI sensory. But recent ideas in category theory are molding
it and CA’s are always rearing their tiny little heads….



But spending time digesting Novamente theory, and then general AGI
structure, is valuable. Things like graph-storage indexing
methodology: I really need to spend some time on that, especially learning
from the people with experience.



John





From: Benjamin Goertzel [mailto:[EMAIL PROTECTED]

On 10/20/07, Edward W. Porter [EMAIL PROTECTED] wrote:

John,



So rather than a definition of intelligence you want a recipe for how to
make one?



Goertzel's descriptions of Novamente in his two recent books are the
closest, publicly-available approximation of that of which I currently
know.




Actually my books on how to build an AGI are not publicly available at this
point ... but I'm strongly leaning toward making them so ... it's mostly
just a matter of finding time to proofread them, remove obsolete ideas,
etc. and generally turn them from draft manuscripts into finalized
manuscripts.  I have already let a bunch of people read the drafts...

Of course, a problem with putting material like this in dead-tree form is
that the ideas are evolving.  We learn new stuff as we proceed through
implementing the stuff in the books  But the basic framework
(knowledge rep, algorithms, cognitive architecture, teaching methodology)
has not changed as we've proceeded through the work so far, just some of the
details (wherein the devil famously lies ;-)





Re: [agi] An AGI Test/Prize

2007-10-20 Thread Benjamin Goertzel
On 10/20/07, Edward W. Porter [EMAIL PROTECTED] wrote: You mean I wasted my
time and money by buying and reading the Novamente article in Artificial
General Intelligence when I could have bought the new and improved
Advances in Artificial General Intelligence?  What a rip-off!

Ed


Bummer, eh?  ;-)

Seriously though: The articles on NM in the newer AGI edited volume don't
review the overall NM architecture and design as thoroughly as the article
on NM in the older AGI edited volume.   We tried not to be redundant in
writing the NM articles for the new volume.  However, the articles in the
new volume do go into more detail on various specific aspects of the NM
system.

One problem with the original (older) Artificial General Intelligence book
is that the articles in it were actually written in 2002, but the book did
not appear until 2006!  This was because of various delays associated with
the publishing process, which fortunately were not repeated with the newer
volume...

The good news is, the articles on NM in the newer AGI edited volume are
available online at the AGIRI.org website, on the page devoted to the 2006
AGIRI workshop...

http://www.agiri.org/forum/index.php?act=ST&f=21&t=23

-- Ben


 -Original Message-
 *From:* Benjamin Goertzel [mailto:[EMAIL PROTECTED]
 *Sent:* Saturday, October 20, 2007 5:24 PM
 *To:* agi@v2.listbox.com
 *Subject:* Re: [agi] An AGI Test/Prize


 Ah, gotcha...

 The recent book Advances in Artificial General Intelligence gives a
 bunch more detail than those, actually (though not as much of the conceptual
 motivations as The Hidden Pattern) ... but not nearly as much as the
 not-yet-released stuff...

 -- Ben

 On 10/20/07, Edward W. Porter [EMAIL PROTECTED] wrote:
 
   Ben,
 
  The books I was referring to were The Hidden Pattern and Artificial
  General Intelligence, both of which I purchased from Amazon.  I know you
  have a better description, but what is in these two books is quite helpful.
 
  Ed Porter
 
   -Original Message-
  *From:* Benjamin Goertzel [mailto:[EMAIL PROTECTED]
  *Sent:* Saturday, October 20, 2007 4:01 PM
  *To:* agi@v2.listbox.com
  *Subject:* Re: [agi] An AGI Test/Prize
 
 
  On 10/20/07, Edward W. Porter  [EMAIL PROTECTED] wrote:
 
  John,
 
 
 
  So rather than a definition of intelligence you want a recipe for how to
  make one?
 
 
 
  Goertzel's descriptions of Novamente in his two recent books are the
  closest, publicly-available approximation of that of which I currently
  know.
 
 
  Actually my books on how to build an AGI are not publicly available at
  this point ... but I'm strongly leaning toward making them so ... it's
  mostly just a matter of finding time to proofread them, remove obsolete
  ideas, etc. and generally turn them from draft manuscripts into finalized
  manuscripts.  I have already let a bunch of people read the drafts...
 
  Of course, a problem with putting material like this in dead-tree form
  is that the ideas are evolving.  We learn new stuff as we proceed through
  implementing the stuff in the books  But the basic framework (knowledge
  rep, algorithms, cognitive architecture, teaching methodology) has not
  changed as we've proceeded through the work so far, just some of the details
  (wherein the devil famously lies ;-)
 
  -- Ben
 
 
Edward W. Porter
   Porter  Associates
   24 String Bridge S12
   Exeter, NH 03833
   (617) 494-1722
   Fax (617) 494-1822
   [EMAIL PROTECTED]
  
-Original Message-
   *From:* John G. Rose [mailto: [EMAIL PROTECTED]
   *Sent:* Saturday, October 20, 2007 3:16 PM
   *To:* agi@v2.listbox.com
   *Subject:* RE: [agi] An AGI Test/Prize
  
No you are not mundane. All these things on the list (or most) are
   very well to be expected from a generally intelligent system or its
   derivatives. But I have this urge, being a software developer, to smash 
   all
   these things up into their constituent components, partition commonalties,
   eliminate dupes, and perhaps further smash up into an atomic 
   representation
   of intelligence as little intelligent engines that can be combined in
   various ways to build higher level functions. Kind of like a cellular
   automata approach and perhaps CA structures can be used. I really don't 
   want
   to waste 10 years developing a giant piece of bloatage code that never 
   fully
   works. Better to exhaust all possibilities in the mind and on paper as 
   much
   as possible as software dev can be a giant PIA mess if not thought out
   beforehand as much as possible. Yes you can go so far before doing
   prototyping and testing but certain prototypes can take many months to
   build.
  
  
  
   Several on this email list have already gotten to this point and it
   may be more productive digesting their systems instead of reinventing…  
   Even
   so that leaves many questions open about testing. Someone can claim they
   have AGI but how do you really know, could be just a highly sophisticated
   chatterbot.

RE: [agi] An AGI Test/Prize

2007-10-20 Thread John G. Rose
http://en.wikipedia.org/wiki/Algebraic_structure

 

http://en.wikipedia.org/wiki/Cellular_automata

 

Start reading..
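
For anyone who wants to see rather than read: below is a minimal, self-contained illustration of what a cellular automaton is. It is the textbook elementary CA (Wolfram's Rule 110), purely generic, and implies nothing about how CAs actually figure in John's design.

```python
# Elementary cellular automaton (Wolfram Rule 110): a row of 0/1 cells,
# each updated from its 3-cell neighborhood by a fixed rule table.

def step(cells, rule=110):
    """Advance one generation; edges wrap around."""
    n = len(cells)
    out = []
    for i in range(n):
        left, me, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (me << 1) | right  # value 0..7
        out.append((rule >> neighborhood) & 1)          # rule bit = new state
    return out

def run(width=31, steps=5):
    """Evolve from a single seed cell; return all generations."""
    cells = [0] * width
    cells[width // 2] = 1
    history = [cells]
    for _ in range(steps):
        cells = step(cells)
        history.append(cells)
    return history

if __name__ == "__main__":
    for row in run():
        print("".join("#" if c else "." for c in row))
```

Rule 110 is a good poster child here because, despite the one-line update rule, it is known to be computationally universal.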

 

John

 

 

From: Edward W. Porter [mailto:[EMAIL PROTECTED] 



 

John,

 

"[A]bstract algebra based engine" that's "basically an algebraic structure
pump" sounds really exotic.  I'm visualizing a robo-version of my ninth
grade algebra teacher on speed.

 

If it's not giving away the crown jewels, what in the hell is it and how does
it fit into AGI?

 

And what are the always rearing "CA"s, you know, the ones with the tiny
little heads?

 


Re: Images aren't best WAS Re: [agi] Human memory and number of synapses

2007-10-20 Thread Charles D Hixson

Let me take issue with one point (most of the rest I'm uninformed about):
Relational databases aren't particularly compact.  What they are is 
generalizable...and even there...
The most general compact database is a directed graph.  Unfortunately, 
writing queries for retrieval requires domain knowledge, and so does 
designing the db files.  A directed graph db is (or rather can be) also 
more compact than a relational db.


The reason that relational databases won out was because it was easy to 
standardize them.  Prior to them, most dbs were hierarchical.  This was 
also more efficient than relational databases, but was less flexible.  
The net databases existed, but were more difficult to use.


My suspicion is that we've evolved to use some form of net db storage.  
Probably one that's equivalent to a partial directed graph (i.e., some, 
but not all, node links are bidirectional).  This is probably the most 
efficient form that we know of.  It's also a quite difficult one to 
learn.  But some problems can't be adequately represented by anything 
else.  (N.B.:  It's possible to build a net db within a relational 
db...but the overhead will kill you.  It's also possible to build a 
relational db within a net db, but sticking to the normal-form discipline 
is nigh unto impossible.  That's not the natural mode for a net db.  So 
the Relational db is probably the db analog of Turing complete...but 
when presented with a problem that doesn't fit, it's also about as 
efficient as a Turing machine.  So this isn't an argument that you 
REALLY can't use a relational db for all of your representations, but 
rather that it's a really bad idea.)
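
Charles's overhead point can be seen in miniature. The sketch below (toy data, illustrative only) stores the same directed graph two ways: natively as an adjacency dict, standing in for a graph database, and relationally as an edge table in SQLite. A two-hop reachability query is one pointer-chase per hop in the native form but a self-join per hop in SQL, which is exactly the overhead that piles up when a net db is emulated inside a relational one.

```python
# Same directed graph, stored natively (adjacency dict) and relationally
# (edge table).  Each traversal hop costs a join in the relational form.
import sqlite3

edges = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]

# Native directed-graph form: node -> list of successors.
graph = {}
for src, dst in edges:
    graph.setdefault(src, []).append(dst)

def native_two_hop(start):
    """Nodes reachable in exactly two hops, via direct pointer chasing."""
    return sorted({n2 for n1 in graph.get(start, []) for n2 in graph.get(n1, [])})

# Relational form: an edge table; every hop becomes a self-join.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE edge (src TEXT, dst TEXT)")
db.executemany("INSERT INTO edge VALUES (?, ?)", edges)

def sql_two_hop(start):
    """Same query, as a self-join per hop on the edge table."""
    rows = db.execute(
        """SELECT DISTINCT e2.dst FROM edge e1
           JOIN edge e2 ON e1.dst = e2.src
           WHERE e1.src = ? ORDER BY e2.dst""", (start,)).fetchall()
    return [r[0] for r in rows]

if __name__ == "__main__":
    print(native_two_hop("a"), sql_two_hop("a"))
```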


Mark Waser wrote:
But how much information is in a map, and how much in the 
relationship database? Presumably you can put some v. rough figures 
on that for a given country or area. And the directions presumably 
cover journeys on roads? Or walks in any direction and between any 
spots too?


All of the information in the map is in the relational database 
because the actual map is produced from the database (and information 
doesn't appear from nowhere).  Or, to be clearer, almost *any* map you 
can buy today started life in a relational database.  That's how the 
US government stores its maps.  That's how virtually all modern map 
printers store their maps because it's the most efficient way to store 
map information.


The directions don't need to assume roads.  They do so because that is 
how cars travel.  The same algorithms will handle hiking paths.  Very 
slightly different algorithms will handle off-road/off-path and will 
even take into account elevation, streams, etc. -- so, to clearly 
answer your question --  the modern map program can do everything that 
you can do with a map (and even if it couldn't, the fact that the map 
itself is produced solely from the database eliminates your original 
query).
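
As a toy version of this claim, the sketch below keeps the whole "map" as relational rows (road-segment records, invented for the example) and derives directions from those rows with a shortest-path search; drawing the map at any scale would just be another query over the same rows. Real mapping services are vastly more elaborate, so treat this purely as an illustration.

```python
# A "map" as relational rows: (road_name, x1, y1, x2, y2), one row per
# road segment.  Directions fall out of a Dijkstra search over the rows.
import heapq, math

SEGMENTS = [
    ("Main St",  0, 0, 2, 0),
    ("Main St",  2, 0, 4, 0),
    ("Oak Ave",  2, 0, 2, 3),
    ("River Rd", 2, 3, 4, 3),
    ("Bypass",   4, 0, 4, 3),
]

def directions(start, goal):
    """Shortest route over the segment rows; returns (distance, road list)."""
    adj = {}
    for name, x1, y1, x2, y2 in SEGMENTS:
        a, b = (x1, y1), (x2, y2)
        w = math.dist(a, b)
        adj.setdefault(a, []).append((b, w, name))
        adj.setdefault(b, []).append((a, w, name))
    frontier = [(0.0, start, [])]
    seen = set()
    while frontier:
        cost, node, roads = heapq.heappop(frontier)
        if node == goal:
            out = roads[:1]                       # collapse repeated road names
            for r in roads[1:]:
                if r != out[-1]:
                    out.append(r)
            return cost, out
        if node in seen:
            continue
        seen.add(node)
        for nxt, w, name in adj.get(node, []):
            if nxt not in seen:
                heapq.heappush(frontier, (cost + w, nxt, roads + [name]))
    return math.inf, []

if __name__ == "__main__":
    print(directions((0, 0), (2, 3)))
```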




- Original Message - From: Mike Tintner 
[EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Saturday, October 20, 2007 9:59 AM
Subject: Re: [agi] Human memory and number of synapses


MW: Take your own example of an outline map -- *none* of the 
current high-end
mapping services (MapQuest, Google Maps, etc) store their maps as 
images. They *all* store them symbolically in a relational database 
because that is *the* most efficient way to store them so that they 
can produce all of the different scale maps and directions that they 
provide every day.


But how much information is in a map, and how much in the 
relationship database? Presumably you can put some v. rough figures 
on that for a given country or area. And the directions presumably 
cover journeys on roads? Or walks in any direction and between any 
spots too?




Re: [agi] An AGI Test/Prize

2007-10-20 Thread Vladimir Nesov
On 10/21/07, John G. Rose [EMAIL PROTECTED] wrote:

  http://en.wikipedia.org/wiki/Algebraic_structure



 http://en.wikipedia.org/wiki/Cellular_automata



 Start reading….


John,

It doesn't really help in understanding how a system described by such terms
is related to an implementation of AGI. It sounds pretty much like "I use a
Turing Machine," only with a more exotic equivalent. If you could be more
specific, it'd be interesting to have at least a rough picture of what your
approach is about.


-- 
Vladimir Nesov


Re: [agi] Human memory and number of synapses.. P.S.

2007-10-20 Thread Charles D Hixson

FWIW:
A few years (decades?) ago some researchers took PET scans of people who 
were imagining a rectangle rotating (in 3-space, as I remember).  They 
naturally didn't get much detail, but what they got was consistent with 
people applying a rotation algorithm within the visual cortex.  This 
matches my internal reporting of what happens.


Parallel processors optimize things differently than serial processors, 
and this wasn't a stored image.  But it was consistent with an array of 
cells laid out in a rectangle activating, and having that activation 
precess as the image was visualized to rotate. 

Well, the detail wasn't great, and I never heard that it went anywhere 
after the initial results.  (Somebody probably got a doctorate...and 
possibly left to work elsewhere.)  But it was briefly written up in the 
popular science media (New Scientist? Brain-Mind Bulletin?) 

Anyway there's low resolution, possibly unconfirmed, evidence that when 
we visualize images, we generate a cell activation pattern within the 
visual cortex that has an activation boundary approximating in shape the 
object being visualized.  (This doesn't say anything about how the 
information is stored.)
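
As a loose software analogy to that experiment (and no more than that; real cortical dynamics are nothing like this), one can model the "cortex" as a grid of cells, activate a rectangular patch, and step the activation pattern itself through discrete rotations:

```python
# A grid of cells with an activated bar, stepped through 90-degree
# rotations: the activation pattern moves, not a stored image.

def blank(n):
    """An n x n grid of inactive cells."""
    return [[0] * n for _ in range(n)]

def activate_rect(grid, top, left, height, width):
    """Switch on a rectangular patch of cells."""
    for r in range(top, top + height):
        for c in range(left, left + width):
            grid[r][c] = 1
    return grid

def rotate90(grid):
    """One rotation step: each cell's activation moves to its rotated slot."""
    n = len(grid)
    return [[grid[n - 1 - c][r] for c in range(n)] for r in range(n)]

if __name__ == "__main__":
    g = activate_rect(blank(5), 1, 0, 1, 3)  # a horizontal 1x3 bar
    for _ in range(4):
        print("\n".join("".join("#" if x else "." for x in row) for row in g))
        print()
        g = rotate90(g)
```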



Mark Waser wrote:
Another way of putting my question/ point is that a picture (or map) 
of your face is surely a more efficient, informational way to store 
your face than any set of symbols - especially if a doctor wants to 
do plastic surgery on it, or someone wants to use it for any design 
purpose whatsoever?


No, actually, most plastic surgery planning programs map your face as 
a limited set of three dimensional points, not an image.  This allows 
for rotation and all sorts of useful things.  And guess where they 
store this data . . . . a relational database -- just like any other 
CAD program.


Images are *not* an efficient way to store data.  Unless they are 
three-dimensional images, they lack data.  Normally, they include a 
lot of unnecessary or redundant data.  It is very, very rare that a 
computer stores any but the smallest image without compressing it.  
And remember, an image can be stored as symbols in a relational 
database very easily as a set of x-coords, y-coords, and colors.


You're stuck on a crackpot idea with no proof and plenty of 
counter-examples.




[agi] Re: Images aren't best

2007-10-20 Thread Mark Waser

Let me take issue with one point (most of the rest I'm uninformed about):
So this isn't an argument that you REALLY can't use a relational db for 
all of your representations, but rather that it's a really bad idea.)


I agree completely.  The only point that I was trying to hammer home was 
that a graphic or image is NOT a vastly - if not infinitely - more 
efficient way to store information (which was the troll's original 
statement).  I would certainly pick and choose my representation schemes 
based upon what I want to do (and I agree fully that partial directed graphs 
and hyperdbs are both probably necessary for a lot of things and not 
effectively isomorphic to current relational db technology).



- Original Message - 
From: Charles D Hixson [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Saturday, October 20, 2007 6:40 PM
Subject: Re: Images aren't best WAS Re: [agi] Human memory and number of 
synapses




Let me take issue with one point (most of the rest I'm uninformed about):
Relational databases aren't particularly compact.  What they are is 
generalizable...and even there...
The most general compact database is a directed graph.  Unfortunately, 
writing queries for retrieval requires domain knowledge, and so does 
designing the db files.  A directed graph db is (or rather can be) also 
more compact than a relational db.


The reason that relational databases won out was because it was easy to 
standardize them.  Prior to them, most dbs were hierarchical.  This was 
also more efficient than relational databases, but was less flexible.  The 
net databases existed, but were more difficult to use.


My suspicion is that we've evolved to use some form of net db storage. 
Probably one that's equivalent to a partial directed graph (i.e., some, 
but not all, node links are bidirectional).  This is probably the most 
efficient form that we know of.  It's also a quite difficult one to learn. 
But some problems can't be adequately represented by anything else. 
(N.B.:  It's possible to build a net db within a relational db...but the 
overhead will kill you.  It's also possible to build a relational db 
within a net db, but sticking to the normal-form discipline is nigh unto 
impossible.  That's not the natural mode for a net db.  So the Relational 
db is probably the db analog of Turing complete...but when presented with 
a problem that doesn't fit, it's also about as efficient as a Turing 
machine.  So this isn't an argument that you REALLY can't use a relational 
db for all of your representations, but rather that it's a really bad 
idea.)


Mark Waser wrote:
But how much information is in a map, and how much in the relationship 
database? Presumably you can put some v. rough figures on that for a 
given country or area. And the directions presumably cover journeys on 
roads? Or walks in any direction and between any spots too?


All of the information in the map is in the relational database because 
the actual map is produced from the database (and information doesn't 
appear from nowhere).  Or, to be clearer, almost *any* map you can buy 
today started life in a relational database.  That's how the US 
government stores its maps.  That's how virtually all modern map 
printers store their maps because it's the most efficient way to store 
map information.


The directions don't need to assume roads.  They do so because that is 
how cars travel.  The same algorithms will handle hiking paths.  Very 
slightly different algorithms will handle off-road/off-path and will even 
take into account elevation, streams, etc. -- so, to clearly answer your 
question --  the modern map program can do everything that you can do 
with a map (and even if it couldn't, the fact that the map itself is 
produced solely from the database eliminates your original query).




- Original Message - From: Mike Tintner 
[EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Saturday, October 20, 2007 9:59 AM
Subject: Re: [agi] Human memory and number of synapses


MW: Take your own example of an outline map -- *none* of the current 
high-end
mapping services (MapQuest, Google Maps, etc) store their maps as 
images. They *all* store them symbolically in a relational database 
because that is *the* most efficient way to store them so that they can 
produce all of the different scale maps and directions that they 
provide every day.


But how much information is in a map, and how much in the relationship 
database? Presumably you can put some v. rough figures on that for a 
given country or area. And the directions presumably cover journeys on 
roads? Or walks in any direction and between any spots too?


RE: [agi] An AGI Test/Prize

2007-10-20 Thread John G. Rose
Vladimir,

 

That may very well be the case and something that I'm unaware of. The system
I have in mind basically has I/O that is algebraic structures. Everything
that it deals with is modeled this way. Any sort of system that it analyzes
it converts to a particular structure that represents the data. All of its
internal mechanisms are mathematically abstracted out - except for ancillary
hard-coded out-of-band assistors, AI, statistics, database, etc. The idea
is to have a system that can understand systems and generate systems
specifically. If you need to model a boolean based space for some sort of
sampled data world it sees and correlates to that, the thing would
generate a boolean algebra modeled and represented onto that informational
structure for that particular space instance being studied. For example
electronics theory - it would need to model that world as an instance
based on electronics descriptor items and operators in that particular world
or space set. Electronics theory world could be spat out as something
very minor that it understands.
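
For readers who skipped the Wikipedia link: an algebraic structure is just a carrier set plus one or more operations, and its defining axioms are mechanically checkable. The generic sketch below tests whether a finite (set, operation) pair forms a monoid; it is a textbook illustration, not a description of John's engine.

```python
# A finite algebraic structure as a carrier set plus an operation, with
# the monoid axioms (closure, associativity, identity) checked by brute force.
from itertools import product

def is_monoid(elems, op):
    """True iff (elems, op) is closed, associative, and has a two-sided identity."""
    elems = list(elems)
    if any(op(a, b) not in elems for a, b in product(elems, elems)):
        return False                              # not closed
    if any(op(op(a, b), c) != op(a, op(b, c))
           for a, b, c in product(elems, elems, elems)):
        return False                              # not associative
    return any(all(op(e, a) == a == op(a, e) for a in elems) for e in elems)

if __name__ == "__main__":
    # Booleans under OR form a monoid (identity False); integers mod 3
    # under addition form a group; {1, 2} under multiplication is not closed.
    print(is_monoid([False, True], lambda a, b: a or b))
    print(is_monoid([0, 1, 2], lambda a, b: (a + b) % 3))
    print(is_monoid([1, 2], lambda a, b: a * b))
```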

 

Not sure if my terminology is very standard but do you understand the
thinking? It may very well be isomorphic to other AGI structures or theories I
don't know but I kind of like the way it works represented as such because
it seems simple and not messy but very comprehensive and has other good
qualities.

 

John 

 

 

From: Vladimir Nesov [mailto:[EMAIL PROTECTED] 



John,

It doesn't really help in understanding how a system described by such terms
is related to an implementation of AGI. It sounds pretty much like "I use a
Turing Machine," only with a more exotic equivalent. If you could be more
specific, it'd be interesting to have at least a rough picture of what your
approach is about.





Re: [agi] Human memory and number of synapses.. P.S.

2007-10-20 Thread Mark Waser
Anyway there's low resolution, possibly unconfirmed, evidence that when we 
visualize images, we generate a cell activation pattern within the visual 
cortex that has an activation boundary approximating in shape the object 
being visualized.  (This doesn't say anything about how the information is 
stored.)


Or, in other words, the brain uses a three-dimensional *spatial* model of 
the object in question -- and certainly not a two-dimensional image.


This goes back to the previous visual vs. spatial argument with the built-in 
human bias towards our primary sense.  Heck, look at the word visualize.  Do 
dolphins visualize or sonarize?  In either case, what the brain is doing is 
creating a three-dimensional model of perceived reality -- and trivializing 
it by calling it an image is a really bad idea.
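
Mark's "spatial model, not image" distinction is easy to demonstrate in a few lines: a shape stored as rows of (x, y, z) points, exactly the kind of data that sits in a database table, supports rotation to an arbitrary viewpoint, after which a flat "image" is just a lossy projection of it. The points below are toy data, not any real face model.

```python
# A shape as a table of (x, y, z) rows: rotate to any viewpoint, then
# project to 2-D.  The projection loses depth; the point table does not.
import math

POINTS = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]  # toy "face"

def rotate_z(points, degrees):
    """Rotate the whole point set about the z axis."""
    t = math.radians(degrees)
    c, s = math.cos(t), math.sin(t)
    return [(c * x - s * y, s * x + c * y, z) for x, y, z in points]

def project(points):
    """Drop z: the flat 'image' of the shape from this viewpoint."""
    return [(round(x, 6), round(y, 6)) for x, y, _ in points]

if __name__ == "__main__":
    print(project(POINTS))
    print(project(rotate_z(POINTS, 90)))
```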


- Original Message - 
From: Charles D Hixson [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Saturday, October 20, 2007 6:49 PM
Subject: Re: [agi] Human memory and number of synapses.. P.S.



FWIW:
A few years (decades?) ago some researchers took PET scans of people who 
were imagining a rectangle rotating (in 3-space, as I remember).  They 
naturally didn't get much detail, but what they got was consistent with 
people applying a rotation algorithm within the visual cortex.  This 
matches my internal reporting of what happens.


Parallel processors optimize things differently than serial processors, 
and this wasn't a stored image.  But it was consistent with an array of 
cells laid out in a rectangle activating, and having that activation 
precess as the image was visualized to rotate.
Well, the detail wasn't great, and I never heard that it went anywhere 
after the initial results.  (Somebody probably got a doctorate...and 
possibly left to work elsewhere.)  But it was briefly written up in the 
popular science media (New Scientist? Brain-Mind Bulletin?)
Anyway there's low resolution, possibly unconfirmed, evidence that when we 
visualize images, we generate a cell activation pattern within the visual 
cortex that has an activation boundary approximating in shape the object 
being visualized.  (This doesn't say anything about how the information is 
stored.)



Mark Waser wrote:
Another way of putting my question/ point is that a picture (or map) of 
your face is surely a more efficient, informational way to store your 
face than any set of symbols - especially if a doctor wants to do 
plastic surgery on it, or someone wants to use it for any design purpose 
whatsoever?


No, actually, most plastic surgery planning programs map your face as a 
limited set of three dimensional points, not an image.  This allows for 
rotation and all sorts of useful things.  And guess where they store this 
data . . . . a relational database -- just like any other CAD program.


Images are *not* an efficient way to store data.  Unless they are 
three-dimensional images, they lack data.  Normally, they include a lot 
of unnecessary or redundant data.  It is very, very rare that a computer 
stores any but the smallest image without compressing it.  And remember, 
an image can be stored as symbols in a relational database very easily as 
a set of x-coords, y-coords, and colors.


You're stuck on a crackpot idea with no proof and plenty of 
counter-examples.




RE: [agi] An AGI Test/Prize

2007-10-20 Thread John G. Rose
Hi Edward,

 

I don't see any problems dealing with either discrete or continuous. In fact,
in some ways it'd be nice to eliminate discrete and just operate in
continuous mode, but discrete maps very well onto binary computers.
Continuous is just a lot of discrete, with the density depending on resources,
or defined as ranges in sets, other descriptors, and so on.
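The "continuous is just a lot of discrete" point can be sketched numerically. This is a generic illustration (a Riemann-sum approximation of an integral), not John's actual methodology: as the discrete density grows with resources, the discrete answer approaches the continuous one.

```python
import math

def discrete_integral(f, a, b, n):
    """Midpoint Riemann sum: approximate the integral of f on [a, b] with n cells."""
    step = (b - a) / n
    return sum(f(a + (i + 0.5) * step) for i in range(n)) * step

# continuous truth: the integral of sin over [0, pi] is exactly 2
coarse = discrete_integral(math.sin, 0.0, math.pi, 8)     # few resources
fine = discrete_integral(math.sin, 0.0, math.pi, 1024)    # many resources
print(abs(coarse - 2.0) > abs(fine - 2.0))  # True: denser discretization, closer to continuous
```

The denser grid simply spends more of the (discrete, binary) machine's resources to stand in for the continuum.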

 

I'm not really well versed in NARS and Novamente, so I can't comment on them,
but they are light years down the road. They are basically in the implementation
stage, closer to realized utility than just theories.

 

Oh, those 55(80),000 lines of code are an AI product I am making, so it is not
AGI, but the thing basically has stubs for AGI, or could be used by an AGI.

 

But the methodology I am talking about seems very workable with
data from the real world. It's hard for me to find things that it doesn't
work with, although real tests need to be performed. BTW, this type of
thinking, I'm sure, is well analyzed by many abstract algebra mathematicians.
Computability issues exist, and these may make the theory unworkable to a
certain degree. I actually don't know enough about a lot of this math to
really work it through deeply for a feasibility study (yet), and much of it
is still up in the air.

 

John

 

 

 

What I found interesting is that, described at this very general level, what
this is saying is actually related to my view of AGI, except that it appears
to be based on a totally crisp, 1-or-0 view of the world.  If that is
correct, it may be very valuable in certain domains, which are themselves
totally or almost totally crisp, but it won't work for most human-like
thinking, because most human concepts, and what they describe in the real
world, are not crisp.

 

THAT IS, UNLESS YOU PLAN TO MODEL CONCEPTUAL FLUIDITY ITSELF IN A TOTALLY
CRISP, UNCERTAINTY-BASED WAY, which is obviously doable at some level.  I
guess that is what you are referring to by saying our mind does crisp
thinking all the time.  Even most of us anti-crispies plan to implement our
fluid systems on digital machinery using binary representation, which we hope
will be crisp (though at the 22nm node it might be a little less than totally
crisp).

 

But the issue is: do your crisp techniques efficiently learn and represent
the fluidity of mental concepts, the non-literal similarity, the many
apparent contradictions, and the uncertainty that dominate human thinking
and sensory information about the real world?
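One hedged illustration of what "crisp machinery, fluid concepts" can mean: graded, fuzzy-looking similarity judgments computed with entirely crisp binary arithmetic. The feature vectors and the cosine measure below are my own invented example, not a description of either party's system.

```python
import math

def cosine(u, v):
    """Cosine similarity: a graded (0..1) judgment from exact, crisp arithmetic."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# hypothetical concept features: (has_fur, lays_eggs, flies, barks)
dog = [1.0, 0.0, 0.0, 1.0]
wolf = [1.0, 0.0, 0.0, 0.5]
sparrow = [0.0, 1.0, 1.0, 0.0]

# non-literal, graded similarity falls out of 1s-and-0s computation
print(cosine(dog, wolf) > cosine(dog, sparrow))  # True: dog is "more like" wolf
```

The machinery underneath is totally crisp; only the outputs are graded, which is one way a crisp substrate can carry fluid concepts.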

 

And if so, how is your approach different from that of the Novamente/Pei
Wang-like approaches?

 

And if so, how well are your (was it?) 80,000 lines of code working at
actually representing and making sense of the shadows projected on the walls
of your AGI's cave by sensations (or data) from the real world?

 

Ed Porter,

 

P.S. Re CA:  maybe I am well versed in them but I don't know what the
acronym stands for.  If it wouldn't be too much trouble could you please
educate me on the subject?

 

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=56005098-c2de21

RE: [agi] An AGI Test/Prize

2007-10-20 Thread Edward W. Porter
So, do you or don't you model uncertainty, contradictory evidence, degree
of similarity, and all those good things?

And what is a CA, or don't I want to know?



Edward W. Porter
Porter  Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: John G. Rose [mailto:[EMAIL PROTECTED]
Sent: Saturday, October 20, 2007 10:39 PM
To: agi@v2.listbox.com
Subject: RE: [agi] An AGI Test/Prize



[Quoted message omitted; it repeats John G. Rose's post of 2007-10-20 in full, above.]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=56007871-ae3472