Re: [agi] Human memory and number of synapses

2007-10-22 Thread Russell Wallace
On 10/20/07, Matt Mahoney [EMAIL PROTECTED] wrote:
[most of post snipped and agreed with]

 Without a number, you could argue that the vast majority of synapses store
 subconscious (non recallable) memories.  But I can still argue otherwise.
 Humans are not significantly superior to other large animals with smaller
 brains (such as a bear or a deer) in skills that don't involve language, such
 as running over rough terrain or discriminating various plants and animals.

As I understand it, this is not the case.

Tests of throwing accuracy have put chimpanzees' typical error in feet
in the same ballpark as humans' typical error in inches. (Some of this
is mechanical - the arrangement of bones and muscles in the human arm
trades off some strength for accuracy - but some of it is neural.)

Humans distinguish a larger number of food and similar non-food plants
and animals than any other species.

Humans recognize a larger number of individuals in a social context
than any other species. I don't have a reference handy, but someone
once plotted a graph of brain size versus number of individuals
recognized for various social animals - and found humans fall about
where you'd expect on the graph given our brain size.

Fossil evidence suggests the expansion of brain size in our ancestral
line roughly coincides with toolmaking. Spoken language doesn't
fossilize, so we're somewhat in the realm of conjecture here, but it
has been at least plausibly reckoned that language came later.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=56204583-b479f2


[agi] Re: Bogus Neuroscience

2007-10-22 Thread Mark Waser
If I see garbage being peddled as if it were science, I will call it 
garbage.


Amen.  The political correctness of forgiving people for espousing total 
BS is the primary reason many egregious things have gone on for far, *far* too 
long.





RE: [agi] An AGI Test/Prize

2007-10-22 Thread John G. Rose
Ben,

 

That is sort of a neat kind of device. I will have to think about it, since it
is fairly dynamic; I may have to look that one up and experiment with it.

 

The kinds of algebraic structures I'm talking about are basically as many as
possible, plus things like sets without operators, and related things not
classified as algebraic. I can talk generally about this and then maybe get
specific. The idea is this: let's say you are a developer writing, say, a web
server. How do you go about it? The first thing you do is scrounge the
internet for snippets, source code, libraries, specs, etc.  The AGI I'm
talking about is approached the same way, except you scrounge mathematics
publications, generally those dealing with abstract algebras. There are
hundreds of years of such code snippets (with proofs, BTW), but we start with
the simple stuff - groups, rings, fields, algebras, groupoids, etc.,
including sub-chunks and twists of these things - sticking with discrete
structures for starters, with some continuous here and there.

 

One might ask: why do it this way? The idea is that the framework is an
elaborate, universal, super-powerful construct - basically all of abstract
math - defined by man cumulatively over time, grounded in rigorous proofs and
absolutes. The goal is to get everything into it, meaning all data input is
analyzed for algebraic structure and put into the thing. It's a highly dense
algebraic superhighway mesh. Yes, you have to emulate it on digital
computers - you go from an infinite algebraic mesh to a physically real
digital subset - but that's kind of what our brains do. We happen to live (at
least from a day-to-day perspective) in a very finite-resource world. I'd
like to delve deeper into digital physics, but will not here :-)

 

So there is a little background. All we are talking about is math, data, and
a computer. So how does stuff get into it? Think about it this way: built-in
lossy compression. Yes, you have sensory-memory duration gradations, for
example from photographic down to skeletoid, but extracting the algebraic
structure is where the AI and stats tools get used. You can imagine how that
works; the goal is algebraic structure, especially operators and magma
detection. Imagine, for example, a dog running: look at all the cyclic groups
going on - symmetry, sets (these are signatures), motion operators, subgroups
of bodily-movement definitions sampled as behavioral display. Then put the
dog into memory - morphism storage of all dogs ever seen; think of a
telescoping, morphism-tree, index-like structure. The AGI internals include
morphism and functor networks, kind of like analogy tree nets. Subgroups,
subfields, etc. are very important, since you leverage their defined
structure onto their instance representations.
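To make "detecting algebraic structure" concrete at the very bottom of the
stack, here is a toy sketch (an illustration only, not the actual system
described here): represent a finite operation as a lookup table and test the
group axioms. Z_4, a cyclic group of the kind invoked above, passes.

```python
from itertools import product

def is_group(elems, op):
    """Check the group axioms for a finite set with a binary operation
    given as a full table: op[(a, b)] = a*b."""
    # Closure: every product must land back in the set
    if any(op[(a, b)] not in elems for a, b in product(elems, repeat=2)):
        return False
    # Associativity: (a*b)*c == a*(b*c) for all triples
    if any(op[(op[(a, b)], c)] != op[(a, op[(b, c)])]
           for a, b, c in product(elems, repeat=3)):
        return False
    # Identity: some e with e*a == a*e == a for all a
    ident = next((e for e in elems
                  if all(op[(e, a)] == a and op[(a, e)] == a for a in elems)),
                 None)
    if ident is None:
        return False
    # Inverses: every a has some b with a*b == identity
    return all(any(op[(a, b)] == ident for b in elems) for a in elems)

# Z_4: addition mod 4, the kind of cyclic group mentioned above
elems = list(range(4))
table = {(a, b): (a + b) % 4 for a, b in product(elems, repeat=2)}
print(is_group(elems, table))  # True
```

A real structure-detection pipeline would of course have to infer candidate
operations from data rather than be handed a table, but the axiom check is
the same.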

 

Linguistic semantics? Same way. The AI-and-stats sensory layer has to break
it up into algebraic structure. You need complexity detection: a view of a
mountain and a view of a page of text have different complexity signatures,
so the system detects text. The gradation from image to algebraic
structure - the exploded text, sets and operators - is processed according
to its complexity signature, ripped apart, and put into the algebraic
text-structure mesh memory: the built-in telescoping morphism tree (or
basically mossy or wormy structures at this point, seen in dimensional
cross-section). The linguistic text structure is hierarchies of intersecting
subsets and subgroups, with morphic relational trees intersecting
cyclic-group and subgroup indexers, etc., tied into the KB through, once
again, algebraic structure. Knowledge is very compressed and cyclic-group
centric (especially, it seems, physical-world knowledge); it sort of
collapses with a self-organizing effect as more data is added, and memories
can be peeled off.

 

Anyway, kind of understand where it's headed? 

 

John

 

 

From: Benjamin Goertzel [mailto:[EMAIL PROTECTED] 

John Rose,

As a long-lapsed mathematician, I'm curious about your system, but what
you've said about it so far doesn't really tell me much...

Do you have a mathematical description of your system? 

I did some theoretical work years ago representing complex systems dynamics
in terms of abstract algebras.  What I showed there was that you could
represent a certain kind of multi-component system, with complex
inter-component interactions, in such a way that its dynamic evolution over
time is equivalent to the iteration of a quadratic function in a
high-dimensional space with an eccentric multiplication table on it.  The
multiplication table basically encodes information of the form 

(component i) acts_on (component j) to produce (component k)

where acts_on is the mult. operator.  So then complex systems dynamics
all comes down to Julia sets and Mandelbrot sets on high-dimensional real
algebras ;-) 
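For concreteness, a hedged toy version of this picture (a sketch under stated
assumptions, not the actual formalism from that work): take R^3, encode the
multiplication table as a structure tensor T where T[i, j, k] is the k-th
component of "(component i) acts_on (component j)", and run the quadratic
iteration z <- z*z + c in that algebra. A random T generally gives a
non-associative, non-division algebra.

```python
import numpy as np

# Structure tensor for a 3-component system:
# T[i, j, k] = coefficient of component k in (component i) acts_on (component j)
rng = np.random.default_rng(0)
T = rng.normal(scale=0.3, size=(3, 3, 3))

def mult(x, y, T):
    # (x * y)_k = sum_ij x_i y_j T[i, j, k]
    return np.einsum('i,j,ijk->k', x, y, T)

def iterate(z, c, T, steps=20):
    # Quadratic iteration z <- z*z + c in the algebra defined by T;
    # returns None if the orbit escapes (the analogue of leaving a Julia set)
    for _ in range(steps):
        z = mult(z, z, T) + c
        if np.linalg.norm(z) > 1e6:
            return None
    return z

z0 = np.array([0.1, 0.0, -0.1])
c = np.array([0.05, 0.02, 0.0])
print(iterate(z0, c, T))
```

Multiplying basis vectors recovers the table directly: mult(e_i, e_j, T)
equals the row T[i, j, :], which is the acts_on relation above.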

I never ended up making any use of this direction of thinking, but I found
it interesting...

This stuff made it into my 1997 book "From Complexity to Creativity", I
believe...

I am curious what 

Re: [agi] Re: Bogus Neuroscience

2007-10-22 Thread Mark Waser
 True enough, but Granger's work is NOT total BS... just partial BS ;-)

In which case, clearly praise the good stuff but just as clearly (or even more 
so) oppose the BS.

You and Richard seem to be in vehement agreement.  Granger knows his neurology 
and probably his neuroscience (depending upon where you draw the line) but his 
link of neuroscience to cognitive science is not only wildly speculative but 
clearly amateurish and lacking the necessary solid grounding in the latter 
field.

I'm not quite sure why you always hammer Richard for pointing this out.  He 
does have his agenda to stamp out bad science (which I endorse fully) but he 
does tend to praise the good science (even if more faintly) as well.  Your 
hammering of Richard often appears as a strawman to me since I know that you 
know that Richard doesn't dismiss these people's good neurology -- just their 
bad cog sci.  And I really am not seeing any difference between what I 
understand as your opinion and what I understand as his. 


  - Original Message - 
  From: Benjamin Goertzel 
  To: agi@v2.listbox.com 
  Sent: Monday, October 22, 2007 8:00 AM
  Subject: Re: [agi] Re: Bogus Neuroscience





  On 10/22/07, Mark Waser [EMAIL PROTECTED] wrote:
 If I see garbage being peddled as if it were science, I will call it
 garbage.

Amen.  The political correctness of forgiving people for espousing total
BS is the primary cause of many egregious things going on for far, *far* 
too 
long.

  True enough, but Granger's work is NOT total BS... just partial BS ;-)

  I felt his discussion of the details by which the basal ganglia may serve
  as a reward mechanism added something to prior papers I'd read on the
  topic.  Admittedly our knowledge of this neural reward mechanism is still
  way too crude to yield any insights regarding AGI, but, it's still
  interesting.

  On the other hand, his simplified thalamocortical "core" and "matrix"
  algorithms are way too simplified for me.  They seem to sidestep the whole
  issue of complex nonlinear dynamics and the formation of strange attractors
  or transients.  I.e., even if the basic idea he has is right, in which
  thalamocortical loops mediate the formation of semantically meaningful
  activation-patterns in the cortex, his characterization of these patterns
  in terms of categories and subcategories and so forth can at best only be
  applicable to a small subset of examples of cortical function...  The
  difference between the simplified thalamocortical algorithms he presents
  and the real ones seems to me to be the nonlinear dynamics that give rise
  to intelligence ;-) ..

  And this is what leads me to be extremely skeptical of his speculative
  treatment of linguistic grammar learning within his framework.  I think
  he's looking for grammatical structure to be represented at the wrong level
  in his network... at the level of individual activation-patterns rather
  than at the level of the emergent structure of activation-patterns...
  Because his simplified version of the thalamocortical loop is too
  simplified to give rise to nonlinear dynamics that display subtly patterned
  emergent structures...

  -- Ben G



Re: [agi] Re: Bogus Neuroscience

2007-10-22 Thread Benjamin Goertzel

   And I really am not seeing any difference between what I understand as
 your opinion and what I understand as his.


Sorry if I seemed to be hammering on anyone, it wasn't my intention.
(Yesterday was a sort of bad day for me for non-science-related reasons, so
my tone of e-voice was likely off a bit ...)

I think the difference between my and Richard's views on Granger would
likely be best summarized by saying that

-- I think Granger's cog-sci speculations, while oversimplified and surely
wrong in parts, contain important hints at the truth (and in my prior email
I tried to indicate how)

-- Richard OTOH, seems to consider Granger's cog-sci speculations total
garbage

This is a significant difference of opinion, no?

-- Ben


Re: [agi] An AGI Test/Prize

2007-10-22 Thread J Storrs Hall, PhD
On Monday 22 October 2007 08:05:26 am, Benjamin Goertzel wrote:
 ... but dynamic long-term memory, in my view, is a wildly
 self-organizing mess, and would best be modeled algebraically as a quadratic
 iteration over a high-dimensional real non-division algebra whose
 multiplication table is evolving dynamically as the iteration proceeds

Holy writhing Mandelbrot sets, Batman!

Why real and non-division? I particularly don't like real -- my computer can't 
handle the precision :-)

Josh



[agi] Re: Bogus Neuroscience [...]

2007-10-22 Thread A. T. Murray
On Oct 21, 2007, at 6:47 PM, J. Andrew Rogers wrote:

On Oct 21, 2007, at 6:37 PM, Richard Loosemore wrote:
 It took me at least five years of struggle to get to the point  
 where I could start to have the confidence to call a spade a spade


It still looks like a shovel to me.

In what looks not like a spade or a shovel but like
CENSORSHIP -- my message below was in response to

http://www.mail-archive.com/agi@v2.listbox.com/msg07943.html

Date: Fri, 19 Oct 2007 06:18:27 -0700 (PDT)
From: [EMAIL PROTECTED] (A. T. Murray)
Subject: Re: [agi] More public awarenesss that AGI is coming fast
To: agi@v2.listbox.com
Reply-To: agi@v2.listbox.com
 
 
J. Andrew Rogers wrote:
 [...]
 There is enough VC money for everyone with
 a decent business model. Honestly, most AGI
 is not a decent business model.
 
Neither is philosophy, but philosophy prevails.
 
 Otherwise Mentifex would be smothered in cash.
 It might even keep him quiet.
 
I don't need cash beyond the exigencies of daily living.
Right now I'm going to respond off the top of my head
with the rather promising latest news from Mentifex AI.
 
ATM/Mentifex here fleshed out the initial Wikipedia stub of
http://en.wikipedia.org/wiki/Modularity_of_Mind
several years ago. M*ntifex-bashers came in and
rewrote it, but traces of my text linger still.
(And I have personally met Jerry Fodor years ago.)
 
Then for several years I kept the Modularity link
on dozens of mind-module webpages as a point of
departure into Wikipedia. Hordes of Wikipedia
editors worked over and over again on the
Modularity-of-mind article.
 
At the start of September 2007 I decided to
flesh out the Wikipedia connection for each
Mentifex AI mind-module webpage by expanding
from that single link to a cluster of all
discernible Wikipedia articles closely related
to the topic of my roughly forty mind-modules.
 
http://www.advogato.org/article/946.html
is where on 11 September 2007 I posted
Wikipedia-based Open-Source Artificial Intelligence
-- because I realized that I could piggyback
my independent-scholar AI project on Wikipedia
as a growing source of explanatory AI material.
 
http://tech.groups.yahoo.com/group/aima-talk/message/784
is where I suggested (and I quote a few lines):
 It would be nice if future editions of the AIMA textbook
 were to include some treatment of the various independent
 AI projects that are out there (on the fringe?) nowadays.
 
Thereupon another discussant provided a link to
http://textbookrevolution.org -- a site which
immediately accepted my submission of
http://mind.sourceforge.net/aisteps.html as
"Artificial Intelligence Wikipedia-based Free Textbook".
 
So, fortuitously and serendipitously, the whole
direction of Mentifex AI changed in mere weeks.
 
http://AIMind-I.com is an example not only of
a separate AI spawned from Mentifex AI, but also
of why I do not need massive inputs of VC cash,
when other AI devotees just as dedicated as I am
will launch their own mentifex-class AI Mind
project using their own personal resources.
 
Now hear this. The Site Meter logs show that
interested parties from all over the world
are looking at the Mentifex offer of a free
AI textbook based on AI4U + updates + Wikipedia.
 
Mentifex AI is in it for the long haul now.
Not only here in America, but especially
overseas and in third world countries
there are AI-hungry programmers with
unlimited AGI ambition but scant cash.
They are the beneficiaries of Mentifex AI.
 
Arthur
--
http://mentifex.virtualentity.com 



Re: [agi] An AGI Test/Prize

2007-10-22 Thread Benjamin Goertzel
On 10/22/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:

 On Monday 22 October 2007 08:05:26 am, Benjamin Goertzel wrote:
  ... but dynamic long-term memory, in my view, is a wildly
  self-organizing mess, and would best be modeled algebraically as a
 quadratic
  iteration over a high-dimensional real non-division algebra whose
  multiplication table is evolving dynamically as the iteration
 proceeds

 Holy writhing Mandelbrot sets, Batman!

 Why real and non-division? I particularly don't like real -- my computer
 can't
 handle the precision :-)


You need to get the new NVidia AIXI chip ... it's a bargain at $infinity.99
;-)


Re: [agi] Re: Bogus Neuroscience

2007-10-22 Thread Richard Loosemore

Mark Waser wrote:

  True enough, but Granger's work is NOT total BS... just partial BS ;-)
In which case, clearly praise the good stuff but just as clearly (or 
even more so) oppose the BS.
 
You and Richard seem to be in vehement agreement.  Granger knows his 
neurology and probably his neuroscience (depending upon where you draw 
the line) but his link of neuroscience to cognitive science is not only 
wildly speculative but clearly amateurish and lacking the necessary 
solid grounding in the latter field.
 
I'm not quite sure why you always hammer Richard for pointing this out.  
He does have his agenda to stamp out bad science (which I endorse 
fully) but he does tend to praise the good science (even if more 
faintly) as well.  Your hammering of Richard often appears as a strawman 
to me since I know that you know that Richard doesn't dismiss these 
people's good neurology -- just their bad cog sci.  And I really am not 
seeing any difference between what I understand as your opinion and what 
I understand as his. 


You know, you're right:  I do spend a lot less time praising good stuff, 
and I sometimes feel bad about that (Accentuate The Positive, and all that).


But the reason I do so much critiquing is that the AI/Cog 
Sci/Neuroscience area is so badly clogged with nonsense and what we need 
right now is for someone to start cutting down the dead wood.  We need 
to stop new people coming into the field and wasting years (or their 
entire career) reinventing wheels or trying to fix wheels that were 
already known to be broken beyond repair 30 years before they were born.


About the Granger paper, I thought last night of a concise summary of 
how bad it really is.  Imagine that we had not invented computers, but 
we were suddenly given a batch of computers by some aliens, and we tried 
to put together a science to understand how these machines worked.


Suppose, also, that these machines ran Microsoft Word and nothing else.

As scientists, we then divide into at least two camps.  The 
neuroscientists take these computers and just analyze wiring and other 
physical characteristics. After a while these folks can tell you all 
about the different bits they have named and how they are connected: 
DDR3 memory, SLI, frontside bus, water cooling, clock speeds, cache, etc 
etc etc.  Then there is another camp, the cognitive scientists who try 
to understand the Microsoft Word application running on these computers, 
without paying much attention to the hardware.


The cog sci people have struggled to make sense of Word (and still don't 
have a good theory, even today), and over the years they have embraced, 
and then rejected, several really bad theories of how Word works.  One 
of these, which was invented about 70 years ago, and discarded about 50 
years ago, was called "behaviorism", and it had some pretty nutty ideas 
about what was going on.  To the behaviorists, MS Word consisted of a 
huge pile of things that represented words ("word-units"), and the way 
the program worked was that the word-units just had an activation level 
that went up if there were more instances of that word in a document, or 
if the word was in a bigger font, or in bold or italic.  And there were 
links between the word-units called "associations".  The behaviorists 
seriously believed that they could explain all of MS Word this way, but 
today we consider this theory to have been stupidly simplistic, and we 
have far more subtle, complex ideas about what is going on.


What was so bad about the behaviorist theory?  Many, many things, but 
take a look at one of them: it just cannot handle the instance-generic 
distinction (aka the type-token distinction).  It cannot represent 
individual instances of words in the document.  If the word "the" 
appears a hundred times, that just makes the word-unit for "the" so much 
stronger, that's all.  It really doesn't take a rocket scientist to tell 
you that that is a big, fat problem.
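The type-token problem is easy to make concrete with a toy sketch (an
illustration only, not taken from any paper under discussion): a
behaviorist-style store keeps only a per-type activation level, so individual
occurrences are unrecoverable, while a token store keeps each occurrence
linked to its type.

```python
from collections import Counter

doc = "the cat sat on the mat near the door".split()

# "Behaviorist" store: one unit per word type, only an activation level.
activations = Counter(doc)
# Which "the" belongs to which phrase? Unanswerable: the instances are gone.

# Type/token store: each occurrence is its own token, linked to its type.
tokens = list(enumerate(doc))
# Individual instances survive: every occurrence of "the", by position.
the_tokens = [pos for pos, word in tokens if word == "the"]
print(activations["the"], the_tokens)  # 3 [0, 4, 7]
```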


The one virtue of behaviorism is that amateurs can pick up the talk 
pretty quickly, and if they don't know all the ridiculous limitations 
and faults of behaviorism, they can even convince themselves that this 
is the beginnings of a workable theory of intelligence.


So now, along comes a neuroscientist (Granger, although he is only one 
of many) and he writes a paper that is filled with 95% talk about wires 
and busses and caches and connections  and then here and there he 
inserts statements out of the blue that purport to be a description of 
things going on at the Microsoft Word level (and indeed the whole paper 
is supposed to be about finding the fundamental circuit components that 
explain Microsoft Word).  Only problem is that whenever he suddenly 
inserts a few sentences of Microsoft Word talk, it is just a vague 
reference to how the circuitry can explain the things going on in what 
sounds like a *behaviorist* theory!  His statements look wildly out of 
place:  it's all SLI bus connects with a 

RE: [agi] An AGI Test/Prize

2007-10-22 Thread John G. Rose
 Holy writhing Mandelbrot sets, Batman!
 
 Why real and non-division? I particularly don't like real -- my computer
 can't
 handle the precision :-)

Robin - forget all this digital stuff, it's a trap; we need some analog 
nano-computers to help fight these crispy impostors!

John


Re: [agi] Re: Bogus Neuroscience

2007-10-22 Thread Mark Waser
 -- I think Granger's cog-sci speculations, while oversimplified and surely 
 wrong in parts, contain important hints at the truth (and in my prior email 
 I tried to indicate how) 
 -- Richard OTOH, seems to consider Granger's cog-sci speculations total 
 garbage
 This is a significant difference of opinion, no?

As you've just stated it, yes.  However, rereading your previous e-mail, I 
still don't really see where you agree with his cog sci (as opposed to what I 
would still call neurobiology, which I did see you agreeing with).


  - Original Message - 
  From: Benjamin Goertzel 
  To: agi@v2.listbox.com 
  Sent: Monday, October 22, 2007 10:26 AM
  Subject: Re: [agi] Re: Bogus Neuroscience

  [full quoted message snipped]


Re: [agi] Re: Bogus Neuroscience [...]

2007-10-22 Thread Mark Waser

Arthur,

   There was no censorship.  We all saw that message go by.  We all just 
ignored it.  Take a hint.


- Original Message - 
From: A. T. Murray [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, October 22, 2007 10:35 AM
Subject: [agi] Re: Bogus Neuroscience [...]

[full quoted message snipped]






Re: [agi] An AGI Test/Prize

2007-10-22 Thread Richard Loosemore

Benjamin Goertzel wrote:



On 10/22/07, *J Storrs Hall, PhD* [EMAIL PROTECTED] 
mailto:[EMAIL PROTECTED] wrote:


On Monday 22 October 2007 08:05:26 am, Benjamin Goertzel wrote:
  ... but dynamic long-term memory, in my view, is a wildly
  self-organizing mess, and would best be modeled algebraically as
a quadratic
  iteration over a high-dimensional real non-division algebra whose
  multiplication table is evolving dynamically as the iteration
proceeds

Holy writhing Mandelbrot sets, Batman!

Why real and non-division? I particularly don't like real -- my
computer can't
handle the precision :-)


You need to get the new NVidia AIXI chip ... it's a bargain at 
$infinity.99  ;-)


Oh, it's not the price of the NVidia AIXI chip that bothers me, it's the 
delivery:  Amazon says that orders will be shipped when Hell reaches Zero 
Degrees Kelvin.




Richard Loosemore



Re: [agi] Re: Bogus Neuroscience

2007-10-22 Thread Benjamin Goertzel


 About the Granger paper, I thought last night of a concise summary of
 how bad it really is.  Imagine that we had not invented computers, but
 we were suddenly given a batch of computers by some aliens, and we tried
 to put together a science to understand how these machines worked.

 Suppose, also, that these machines ran Microsoft Word and nothing else.



Amusingly, I used a very similar metaphor in a newspaper article I wrote
about the Human Genome Project, back in 2001 (it appeared in the German
paper Frankfurter Allgemeine Zeitung):

http://www.goertzel.org/benzine/dna.htm


Consider a large computer program such as Microsoft Windows.  This program
is produced via a long series of steps.  First, a team of programmers
produces some program code, in a programming language (in the case of
Microsoft Windows, the programming language is C++, with a small amount of
assembly language added in).  Then, a compiler acts on this program code,
producing an executable file – the actual program that we run, and think of
as Microsoft Windows.  Just as with human beings, we have some code, and we
have a complex entity created by the code, and the two are very different
things.   Mediating between the code and the product is a complex process –
in the case of Windows, the C++ compiler; in the case of human beings, the
whole embryological and epigenetic biochemical process, by which DNA grows
into a human infant.

Now, imagine a Windows Genome Project, aimed at identifying every last bit
and byte in the C++ source code of Microsoft Windows.   Suppose the
researchers involved in the Windows Genome Project managed to identify the
entire source code, within 99% accuracy.   What would this mean for the
science of Microsoft Windows?

 Well, it could mean two different things.

 Option 1: If they knew how the C++ compiler worked, then they'd be home
free!  They'd know how to build Microsoft Windows!

 Option 2: On the other hand, what if they not only had no idea how to build
a C++ compiler, but also had no idea what the utterances in the C++
programming language meant?  In other words, they had mapped out the bits
and bytes in the Windows Genome,  the C++ source code of Windows, but it was
all a bunch of gobbledygook to them.  All they have is a large number of
files of C++ source code, each of which is a nonsense series of characters.
Perhaps they recognized some patterns: older versions of Windows tend to be
different in lines 1000-1500 of this particular file.  When file X is
different between one Windows version and another, this other file tends to
also be different between the two versions.   This line of code seems to
have some effect on how the system outputs information to the screen.  Et
cetera.

 Our situation with the Human Genome Project is much more like Option 2 than
it is like Option 1.


--  Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=56327957-1b80ae

Re: [agi] Re: Bogus Neuroscience

2007-10-22 Thread Benjamin Goertzel
On 10/22/07, Mark Waser [EMAIL PROTECTED] wrote:

   -- I think Granger's cog-sci speculations, while oversimplified and
 surely wrong in parts, contain important hints at the truth (and in my prior
 email I tried to indicate how)
  -- Richard OTOH, seems to consider Granger's cog-sci speculations total
 garbage
  This is a significant difference of opinion, no?

 As you've just stated it, yes.  However, rereading your previous e-mail, I
 still don't really see where you agree with his cog sci (as opposed to what
 I would still call neurobiology which I did see you agreeing with).



It's of course quite non-obvious where to draw the line between neuroscience
and cognitive science, in a context like this.

However, what I like in Granger paper, that seems cog-sci-ish to me, is the
idea that functionalities like

-- hierarchical clustering
-- hash coding
-- sequence completion

are provided as part of the neurological instruction set

The attractive cog-sci hypothesis here, as I might reformulate it, is that
higher-level cognitive procedures could palpably take these functionalities
as primitives, sort of as if they were library functions provided by the
brain
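
To make the "library function" framing concrete, here is a minimal toy sketch -- my own illustration, not code from Granger's paper or Novamente -- of the three primitives as callable functions. The function names and the specific algorithms (greedy Hamming-distance clustering, modulo hashing, first-order successor counts) are invented stand-ins for whatever the brain actually implements.

```python
from collections import defaultdict

def hierarchical_cluster(vectors, threshold):
    """Greedy single-pass agglomeration: a vector joins the first
    cluster whose prototype is within `threshold` Hamming distance."""
    clusters = []  # list of (prototype, members)
    for v in vectors:
        for proto, members in clusters:
            if sum(a != b for a, b in zip(proto, v)) < threshold:
                members.append(v)
                break
        else:
            clusters.append((v, [v]))
    return clusters

def hash_code(pattern, n_buckets=16):
    """'Hash coding': collapse a pattern to a small fixed index."""
    return hash(tuple(pattern)) % n_buckets

class SequenceCompleter:
    """First-order sequence completion: given a symbol, return the
    most frequently observed successor."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def learn(self, seq):
        for a, b in zip(seq, seq[1:]):
            self.counts[a][b] += 1

    def complete(self, symbol):
        succ = self.counts[symbol]
        return max(succ, key=succ.get) if succ else None

# A "cognitive program" calling the primitives:
sc = SequenceCompleter()
sc.learn("abcabc")
```

The point of the sketch is only the interface: a higher-level process could call such primitives without knowing how the underlying circuitry implements them.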

So, one way to summarize my view of the paper is
-- The neuroscience part of Granger's paper tells how these
library-functions may be implemented in the brain
-- The cog-sci part consists partly of
- a) the hypothesis that these library-functions are available to
cognitive programs
- b) some specifics about how these library-functions may be used within
cognitive programs

I find Granger's idea a) quite appealing, but his ideas in category b)
fairly uncompelling and oversimplified.

Whereas according to my understanding, Richard seems not to share my belief
in the strong potential meaningfulness of a)

All this is indirectly and conceptually relevant to Novamente because we
have to make decisions regarding which functionalities to supply as
primitives to Novamente, and which functionalities to require it to learn...

However, the cognitive theory underlying NM is totally different than, and
much more complex than, Granger's overall cognitive theory...

-- Ben G


Re: [agi] Re: Bogus Neuroscience

2007-10-22 Thread Mark Waser
 So, one way to summarize my view of the paper is
 -- The neuroscience part of Granger's paper tells how these 
 library-functions may be implemented in the brain
 -- The cog-sci part consists partly of
 - a) the hypothesis that these library-functions are available to 
 cognitive programs 
 - b) some specifics about how these library-functions may be used within 
 cognitive programs
 I find Granger's idea a) quite appealing, but his ideas in category b) 
 fairly uncompelling and oversimplified. 
 Whereas according to my understanding, Richard seems not to share my belief 
 in the strong potential meaningfulness of a)

*Everyone* is looking for how library functions may be implemented precisely 
because they would then *assume* that the library functions would be 
available to thought -- thus a) is not at all unique to Granger, and I would 
even go so far as to not call it a hypothesis.

And I'm also pretty sure that *everyone* believes in the strong potential 
meaningfulness of having library functions.

Granger has nothing new in cog sci except some of the particular details in b) 
-- which you find uncompelling and oversimplified -- so what is the cog sci 
that you find of value?


Re: [agi] Re: Bogus Neuroscience

2007-10-22 Thread Benjamin Goertzel


 Granger has nothing new in cog sci except some of the particular details
 in b) -- which you find uncompelling and oversimplified -- so what is the
 cog sci that you find of value?
 --



Apparently we are using cog sci in slightly different ways...

I agree that he has nothing new and useful to say (in that paper) in cog
psych

However, he has some interesting ideas about the connections between
cognitive primitives and neurological structures/dynamics.  Connections of
this nature are IMO cog sci rather than just neurosci.  At least, that
is consistent with how the term cog sci was used when I was a cog sci
professor, back in the day...

Also, as my knowledge of the cog-sci and neurosci literature is not
comprehensive, I can't always tell when an idea of Granger's is novel
versus when he's just clearly articulating something that was implicit in
the literature beforehand but perhaps not so clearly expressed.  Analogously,
I know Jeff Hawkins has gotten a lot of mileage out of clearly-expressed
articulations of ideas that are pretty much common lore among
neurobiologists (though Hawkins does have some original suggestions as
well...)

(To a significant extent, Granger's articles just summarize ideas from
other, more fine-grained papers.  This does not make them worthless,
however.  In bio-related fields I find summary-type articles quite valuable,
since the original research articles are often highly focused on
experimental procedures.  It's good to understand what the experimental
procedures are but I don't always want to read about them in depth,
sometimes I just want to understand the results and their likely
interpretations...)

-- Ben


RE: [agi] An AGI Test/Prize

2007-10-22 Thread John G. Rose
Yeah, I'm not really agreeing with you here. Though I haven't really
studied other cognitive software structures, I feel that they can be built
simpler and more efficient. But I shouldn't come out saying that unless I
attack some of the details, right? That's a gut reaction I have after
working on so many large software projects.  And it does depend on your
view of cognition. Some of cognition is just hype; it depends on what you
are trying to build. There are a lot of warm-fuzzy, Dr. Feelgood things
going on with cognition. I like cognition as a machine: a systematic
controlled-complexity modeler, edge-of-chaos-surfing, crystallographic,
polytopically harmonic, probabilistic sort of morphism and structure pump,
with SOM injection - yeah, I want a machine that rips through the fabric
of the reality mesh.

 

John

 

 

From: Benjamin Goertzel [mailto:[EMAIL PROTECTED] 




Well the problem is that branches of algebra like universal algebra and
category theory, that don't assume highly particular algebraic rules, don't
really have any deep theorems that tell you anything...

Whereas the branches of algebra that really give you deep information, all
pertain to highly specialized structures that are very unlikely to be
relevant to cognition...





Re: [agi] Re: Bogus Neuroscience

2007-10-22 Thread Mark Waser
I think we've beaten this horse to death . . . . :-)

 However, he has some interesting ideas about the connections between 
 cognitive primitives and neurological structures/dynamics.  Connections of 
 this nature are IMO cog sci rather than just neurosci.  At least, that 
 is consistent with how the term cog sci was used when I was a cog sci 
 professor, back in the day... 

I think that most neurosci practitioners would argue with you.

 (To a significant extent, Granger's articles just summarize ideas from 
 other, more fine-grained papers.  This does not make them worthless, 
 however.  In bio-related fields I find summary-type articles quite valuable, 
 since the original research articles are often highly focused on 
 experimental procedures.  It's good to understand what the experimental 
 procedures are but I don't always want to read about them in depth, 
 sometimes I just want to understand the results and their likely 
 interpretations...) 

So what I'm getting is that you're finding his summary of the neurosci papers 
(the other, more fine-grained papers) as what is useful.


Re: [agi] Re: Bogus Neuroscience

2007-10-22 Thread Benjamin Goertzel


 But each of these things has a huge raft of assumptions built into it:

   -- hierarchical clustering ... OF WHAT KIND OF SYMBOLS?
   -- hash coding ... OF WHAT KIND OF SYMBOLS?
   -- sequence completion ... OF WHAT KIND OF SYMBOLS?

 In each case, Granger's answer is that the symbols are vaguely
 behaviorist units playing an incredibly simplistic role in a simplistic
 system.

 If we take his claims at face value, he has found library functions that
 operate on junk that cannot possibly be symbols at a cognitive level.

 If he had simply said that he had found hiererchical clustering of
 neural signals, or hash coding of neural signals, or sequence completion
 circuits at the neural signal level, I would say good luck to him and
 keep banging the rocks together.

 But he did not:  he made claims about the cognitive level, and the only
 way those claims could be meaningful and useful would be in a cognitive
 level system that is manifestly broken.



Well, I don't fully agree with your final paragraph...

Suppose we take Greenfield's hypothesis that a fundamental role in
cognition, perception and action is played by transient neural assemblies,
that form opportunistically based on circumstance, but that are centered
around cores that are tightly-interconnected neural subnets ...

Potentially, Granger's primitive mechanisms could act on sets of neural
signals coding for these cores, which then indirectly drive the cognitive
activity that occurs mainly on the level of the transient assemblies that
the cores induce...

This is *not* what Granger says, but it seems generally plausible to me...
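
As a crude sketch of that picture -- entirely my own toy, with invented nodes and weights, not anything from Greenfield's or Granger's work -- a tightly interconnected core plus contextual priming determines which additional nodes get recruited into the transient assembly:

```python
# Symmetric association weights between named nodes (all values invented).
weights = {
    ("core1", "core2"): 0.9, ("core2", "core3"): 0.9, ("core1", "core3"): 0.9,
    ("core1", "cup"): 0.3, ("core2", "kitchen"): 0.3, ("core3", "red"): 0.2,
}

def w(a, b):
    """Look up the symmetric association weight between two nodes."""
    return weights.get((a, b)) or weights.get((b, a)) or 0.0

def transient_assembly(core, context, threshold=0.4):
    """A node joins the assembly if its association to the active core,
    boosted by current contextual priming, crosses a threshold."""
    nodes = {n for pair in weights for n in pair}
    assembly = set(core)
    for n in nodes - assembly:
        drive = sum(w(n, c) for c in core) + context.get(n, 0.0)
        if drive >= threshold:
            assembly.add(n)
    return assembly

# The same core yields different assemblies in different contexts:
core = {"core1", "core2", "core3"}
a1 = transient_assembly(core, context={"kitchen": 0.2})
a2 = transient_assembly(core, context={"red": 0.3})
```

The core is stable; the assembly it induces is opportunistic and context-dependent, which is the property the hypothesis turns on.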

BTW I am curious to hear something about what you think might be
a correct cognitive theory ;-)

-- Ben G


Re: [agi] Re: Bogus Neuroscience

2007-10-22 Thread Benjamin Goertzel
On 10/22/07, Mark Waser [EMAIL PROTECTED] wrote:

  I think we've beaten this horse to death . . . . :-)

  However, he has some interesting ideas about the connections between
 cognitive primitives and neurological structures/dynamics.  Connections of
 this nature are IMO cog sci rather than just neurosci.  At least, that
 is consistent with how the term cog sci was used when I was a cog sci
 professor, back in the day...

 I think that most neurosci practitioners would argue with you.



Cognitive science does not equal cognitive psychology.  It's supposed to be
an integrative discipline.  When I co-founded the cog sci degree programme
at the University of Western Australia in the 90's, we included faculty from
biology, psychology, computer science, philosophy, electrical engineering,
linguistics and mathematics.


 So what I'm getting is that you're finding his summary of the neurosci
 papers (the other, more fine-grained papers) as what is useful.



I didn't read all the references, so I don't honestly know where his
summarizing of others' ideas leaves off and his own original ideas
begin  If this were my main area of research I would dig in to that
level of depth, but I've got an AGI to build ;-)

ben


Re: Bogus Neuroscience [WAS Re: [agi] Human memory and number of synapses]

2007-10-22 Thread Benjamin Goertzel

 As I said above, it leaves many things unsaid and unclear.  For example,
 does it activate all or multiple nodes in a cluster together or not?  Does
 it always activate the most general cluster covering a given pattern, or
 does it use some measure of how well a cluster fits input to select what,
 and to what degree, cluster(s) in the generalization hierarchy spreads
 its(their) activation through the matrix loop?  Is it correct to assume that
 this form of sequential spreading activation can take place between massive
 number of subconsciously activated nodes simultaneously, or is it limited to
 a relatively few, or near-conscious, nodes? How exactly does the model of the
 basal ganglia described in the earlier part of this paper plug into the
 operation of the core and matrix loops described in its later part?  How
 does it handle sequential activations that are fed to it in a different
 order than that originally learned?  Etc.


My hypothesis is that when the nodes in a cluster are activated, this then
leads to the recruitment of other associated nodes not in the cluster, into
a contextually-appropriate transient assembly ... and that much of what's
interesting in cognitive neurodynamics has to do with these transient
assemblies and their interactions... which of course Granger does not touch
on...

Re consciousness, I tend to agree with Greenfield's hypothesis that
wide-ranging transient neural assemblies are associated with conscious
awareness.  This harmonizes well with the hypothesis I make in the Hidden
Pattern, that the more information-theoretically intense patterns in the
brain will tend to correspond to the more subjectively intense consciousness
experiences...

-- Ben


FW: Bogus Neuroscience [WAS Re: [agi] Human memory and number of synapses]

2007-10-22 Thread Edward W. Porter
Richard,

You might be interested to know how much attention one of your articles
has gotten on the agi@v2.listbox.com mailing list under the RE: Bogus
Neuroscience [WAS Re: [agi] Human memory and number of synapses] thread,
which has been dedicated to it.

Below is a message I sent in defense of your paper.

If you have comments to either me or the list I would be interested in
hearing them.

Edward W. Porter
Porter  Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]


-Original Message-
From: Edward W. Porter [mailto:[EMAIL PROTECTED]
Sent: Monday, October 22, 2007 1:34 PM
To: agi@v2.listbox.com
Subject: RE: Bogus Neuroscience [WAS Re: [agi] Human memory and number of
synapses]



Dear Readers of the RE: Bogus Neuroscience Thread,

Because I am the one responsible for bringing to the attention of this
list the Granger article (“Engines of the brain: The computational
instruction set of human cognition”, by Richard Granger) that has caused
the recent  kerfuffle, this morning I took the time to do a reasonably
careful re-read of it.

I originally read it when I was interested in trying to learn about the
basal ganglia, and didn’t read in depth much beyond its initial
description of how it can serialize activations from a set of active nodes
and learn patterns from such serial activations.

And it was this learning from temporally sequential activations that
caused me to cite the paper to Vladmir.

I had totally forgotten the article’s initial, arguably grossly
overreaching claims of its own importance, because that wasn’t what I
remembered as being important about it.

Upon my complete re-reading this morning, I think, overall, this paper
represents valuable work.  I think its actual brain science is interesting
and important.  Its description of the basal ganglia certainly advanced my
knowledge substantially.  Its basic message about the cortico-thalamic
loops is important – that, in general, each cortical column in the cortex
has two types of loops through the thalamus: a core loop that feeds
directly back to itself, and a matrix loop that feeds forward to widely
distributed portions of the cortex.

A significant portion of the paper is based on computer simulations.  This
makes the resulting observations somewhat questionable, since the accuracy
of neural models can vary tremendously.  But I welcome the general effect
the increasing use of computational neural modeling has had on brain
science.  It lets us create models much more complex than we ever could in
our own human minds, and then give them a spin.

I think his notion that the combination of the two loops through the
cortex allows spreading activation, particularly that between different
localized topological maps, to use a sequential coding to, in effect, bind
information is extremely interesting, and potentially valuable.

It takes a fair amount of thought to understand the significance of this.
Once, when we were both young and single and living in Manhattan, I met a
woman at a party who worked as an Asian art specialist for one of the
world’s biggest art auction houses.  I told her I had seen an excellent
exhibition of 19th-century Japanese art and artifacts that had blown me
away with its abstraction and minimalism.  She responded that in Japanese
literature and art it is often a sign of respect for the intelligence of
your readers and viewers to relay your message in as few words or as
little detail as possible.

In a similar vein, when Granger says his paper describes “the basic
mental operations from which all complex behavioral and cognitive
abilities are constructed,” I think he assumes his intended readers will
be intelligent enough, well versed in brain and cognitive science, and
willing to take the time to understand the potential implications of
what he is saying.  I think he assumes that such readers, and to a certain
degree further research and thought, will fill in much of what is left
unsaid.

If you think about the sequential grammar he describes and include the
ability for time dilation and compression he incorporates from other
papers I have not read, it would possibly, in conjunction with prior
knowledge, provide a mechanism for the learning, perception, and recall of
compositional structures having all the invariance of Hawkins’s
hierarchical memory.  It not only provides for dealing with compositional
patterns that are static, but also ones that are temporal.  It also allows
patterns to be learned that have elements spanning multiple topological
regions of the brain.  This is interesting and quite valuable.

As I said above, it leaves many things unsaid and unclear.  For example,
does it activate all or multiple nodes in a cluster together or not?  Does
it always activate the most general cluster covering a given pattern, or
does it use some measure of how well a cluster fits input to select what,
and to what degree, cluster(s) in the 

Re: Bogus Neuroscience [WAS Re: [agi] Human memory and number of synapses]

2007-10-22 Thread Richard Loosemore

Edward W. Porter wrote:

Dear Readers of the RE: Bogus Neuroscience Thread,

Because I am the one responsible for bringing to the attention of this 
list the Granger article (“Engines of the brain: The computational 
instruction set of human cognition”, by Richard Granger) that has caused 
the recent  kerfuffle, this morning I took the time to do a reasonably 
careful re-read of it.


[snip]

In his Sun 10/21/2007 2:12 PM post Richard Loosemore cited failure to 
answer the following questions as indications of the paper’s worthlessness.


“RICHARD “How does it cope with the instance/generic distinction?”

I assume after the most general cluster, or the cluster
having the most activation from the current feature set,
spreads its activation through the matrix loop, then the
cluster most activated by the remaining features spreads
activation through the matrix loop.  This sequence can
continue to presumably any desired level of detail supported
by the current set of observed, remembered, or imagined
features to be communicated in the brain.  The added detail
from such a sequence of descriptions would distinguish an
instance from a generic description represented by just one
such description.


A misunderstanding:  the question is how it can represent multiple 
copies of a concept that occur in a situation without getting confused 
about which is which.  If the appearance of one chair in a scene causes 
the [chair] neuron (or neurons, if they are a cluster) to fire, then 
what happens when you walk into a chair factory?  What happens when you 
try to understand a sentence in which there are several nouns:  does the 
[noun] node fire more than before, and if it does, how does this help 
you parse the sentence?


This is a DEEP issue:  you cannot just say that this will be handled by 
other neural machinery on top of the basic (neural-cluster = 
representation of generic thing) idea, because that other machinery is 
nontrivial, and potentially it will require the original (neural-cluster 
= representation of generic thing) idea to be abandoned completely.
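
A toy sketch -- my own, not Richard's or Granger's -- of why summed activation on a generic node loses instance identity: the representation records *how much* "chair" is in view, but gives a relation nothing instance-specific to bind to.

```python
from collections import Counter

def activate(scene):
    """Generic-node model: each object in view bumps its concept node's
    activation count; there is one node per concept, not per instance."""
    return Counter(scene)

one_chair = activate(["chair", "table"])
two_chairs = activate(["chair", "chair", "table"])

# The chair node just fires "harder" in the second scene -- there is no
# separate token for each chair, so a relation such as "the left chair
# is red" has nothing instance-specific to attach to.
```

This is exactly the chair-factory problem: scaling up activation distinguishes quantities, not individuals.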




“RICHARD “How does it allow top-down processes to operate in the 
recognition process?”


I don’t think there was anything said about this, but the
need for, and presence in the brain of, both top-down and
bottom-up processes is so well known as to have properly been
assumed.


Granted, but in a system in which the final state is determined by 
expectations as well as by incoming input, the dynamics of the system 
are potentially completely different, and all of Granger's assertions 
about the roles played by various neural structures may have to be 
completely abandoned in order to make allowance for that new dynamic.




“RICHARD “How are relationships between instances encoded?” ”

I assume the readers will understand how it handles temporal
relationships (if you add the time dilation and compression
mentioned above).  Spatial relationships would come from the
topology of V1 (but sensed spatial relationships can also be
built via a Kohonen-net SOM with the temporal difference of
activation time as the SOM’s similarity metric).
Similarly, other higher order relationships can be built

from patterns in the space of hierarchical gen/comp pattern
networks derived from inputs in these two basic dimensions
of space and time plus in the dimensions defined by other
sensory, emotional, and motor inputs.  [I consider motor
outputs as a type of input]. 
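
Edward's parenthetical SOM suggestion can be sketched in a few lines. This is my own toy 1-D Kohonen SOM, assuming activation times are the only input feature; the map size, learning rate, and radius schedule are all invented for illustration.

```python
import math
import random

def train_som(times, map_size=8, epochs=100, lr=0.3, seed=0):
    """1-D Kohonen SOM whose inputs are sensor activation times, so
    sensors that fire close together in time end up on nearby units."""
    rng = random.Random(seed)
    proto = [rng.random() for _ in range(map_size)]  # prototype times
    for epoch in range(epochs):
        # Shrink the neighborhood radius over training, as usual for SOMs.
        radius = 2.0 * (1.0 - epoch / epochs) + 0.1
        for t in times:
            # Winner: the unit whose prototype time is closest to t.
            win = min(range(map_size), key=lambda i: abs(proto[i] - t))
            for i in range(map_size):
                h = math.exp(-((i - win) ** 2) / (2 * radius ** 2))
                proto[i] += lr * h * (t - proto[i])
    return proto

# Two groups of sensors: one firing near t=0, one near t=1.
proto = train_som([0.0, 0.05, 0.95, 1.0])
```

Under these toy parameters the prototype times separate into a low group and a high group, mirroring the two bursts of sensor activity, which is the topographic property Edward is appealing to.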


Again, no:  relationships are extremely dynamic:  any two concepts can 
be linked by a relationship at any moment, so the specific question is, 
if things are represented as clusters of neurons, how does the system 
set up a temporary connection between those clusters, given that there 
is not, in general, a direct link between any two neurons in the brain? 
 You cannot simply strengthen the link between your artichoke 
neuron and your basilisk neuron in order to form the relationship 
caused by my mention of both of them in the same sentence, because, in 
general, there may not be any axons going from one to the other.



“RICHARD “How are relationships abstracted?” 


By shared features.  He addresses how clusters tend to form
automatically.  These clusters are abstractions.


These are only clusters of things.  He has to address this issue 
separately for relationships which are connections or links between 
things.  The question is about types of links, and about how there are 
potentially an infinite number of different types of such links:  how 
are those different types represented and built and used?  Again, a 
simple neural connection is not good enough, because 

RE: Bogus Neuroscience [WAS Re: [agi] Human memory and number of synapses]

2007-10-22 Thread Edward W. Porter
Richard,

I will only respond to one of the questions in your last message, copied
below, because of lack of time.  I pick this example because it was so
“DEEP” (to be heard in your mind with max reverb).  I hoped that if I
could give a halfway reasonable answer to it and if, just maybe, you could
open your mind (and that is one of the main issues in this thread), you
might actually also try to think about how your other questions could be
answered.

In response to this “DEEP” question, I ask: how do you, Richard Loosemore,
normally distinguish different instances of a given type?

By distinguishing characteristics?  (This would include things like little
dings on your car or the junk in its back seat that distinguish it from a
similar make and model of the same year and color.)

If so, that is handled by Granger’s system in the manner described in my
response to the question copied below.

Now when you are dealing with objects that have an identical appearance,
such as Diet Coke cans (the example I normally use when I think of this
problem), often the only thing you can distinguish them by is – again –
their distinguishing characteristics.  But in this case the distinguishing
characteristics would be things like their location, orientation, or
perhaps relationship to other objects.  It would also include implications
that can properly be drawn from or about such characteristics for the type
of thing involved.

For example, if you leave a Diet Coke can (can_1) downstairs in your
kitchen and go up to your bedroom and see an identical-looking Coke can
next to your bed, you would normally assume the can next to your bed was
not can_1, unless you had some explanation for how can_1 was moved next to
your bed.  (For purposes of dealing with the hardest part of the problem,
we will assume all Coke cans have been opened and have the same amount of
Coke with roughly the same level of carbonation.)  If you go back
downstairs and see a Diet Coke can exactly where you left can_1, you will
assume it is can_1 itself, barring some reason to believe the can might
have been replaced with another, such as if you know someone was in your
kitchen during your absence.
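
That persistence-based inference can be caricatured in a few lines -- my own toy, not Granger's model, and a real system would of course reason probabilistically rather than returning a binary verdict:

```python
def same_instance(last_loc, seen_loc, move_explained):
    """Crude identity inference for visually identical objects, using
    location plus an object-persistence prior."""
    if seen_loc == last_loc:
        # Same place as before: assume it is the same object, unless
        # someone could have swapped it in the meantime.
        return not move_explained
    # Different place: assume a different instance unless some
    # explanation accounts for the move.
    return move_explained

# Can left in the kitchen; an identical can appears by the bed.
by_bed = same_instance("kitchen", "bedroom", move_explained=False)
# Can found exactly where it was left, with nobody else around.
in_kitchen = same_instance("kitchen", "kitchen", move_explained=False)
```

The point is that identity judgments for indistinguishable instances rest entirely on contextual features and learned generalities like object persistence, not on any perceptual difference between the cans.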

All these types of inferences are based on generalities, often important
broad generalities like the persistence of objects, that take the learning
of even more basic or more primitive generalities (such as those needed
for object recognition, understanding the concept of physical objects,
the ability to see similarities and dissimilarities between objects, and
spatial and temporal models), all of which take millions of trillions of
machine ops and weeks or months of experience to learn.  So I hope you
will forgive me and Granger if we don’t explain them in detail.  (Goertzel,
in The Hidden Pattern I think it is, actually gives an example of how an
AGI could learn object persistence.)

However, the whole notion of AGI is built on the premise that such things
can be learned by a machine architecture having certain generalized
capabilities and having something like the physical world to interact in
and with.  Those of us who are bullish on AGI think we already have
pretty good ideas about how to make systems that can have the required
capabilities to learn such broad generalities, or at least get us much
closer to such a system, so we can get a much better understanding of what
more is needed, and then try to add it.

With such ideas of how to make an AGI, it becomes much easier to map the
various aspects of it into known, or hypothesized, operations in the
brain.  The features described in Granger’s paper, when combined with
other previous ideas on how the brain could function as an AGI, would seem
to describe a system having roughly the general capability to learn and
properly draw inferences from all of the basic generalizations of the type
I described above, such as the persistence of objects, and what types of
objects move on their own, and with what probabilities under what
circumstances.  For example, Granger’s article explains how to learn
patterns, generalizations of patterns, and patterns of generalizations of
patterns; and with something like a hippocampus it could learn episodes,
and then patterns from episodes, generalizations from patterns from
episodes, patterns of generalizations from episodes, etc.

Yes, the Granger article, itself, does not describe all of the features
necessary for the brain to act as a general AGI, but when interpreted in
the context of enlightened AGI models, such as Novamente, and the current
knowledge and leading hypotheses in brain science, it is easy to imagine
how what he describes could play a very important role in solving mental
problems even as “DEEP” (again with reverb) as that of determining
whether the Diet Coke can on the table is the one you have been drinking
from, or someone else’s.

Has there been a little hand waving in the above explanation?  Yes, but if
you have a good understanding of AGI and its brain equivalent, you 

Re: Bogus Neuroscience [WAS Re: [agi] Human memory and number of synapses]

2007-10-22 Thread Russell Wallace
On 10/23/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
 Still don't buy it. What the article amounts to is that speed-reading is
 fake. No kind of recognition beyond skimming (e.g. just ignoring a
 substantial proportion of the text) is called for to explain the observed
 performance.

And I'm saying never mind articles, try it for yourself. I tried the
experiment, before I wrote that earlier post, it's easy to do. You'll
find you do in fact recognize (I'm making no claims about rate of
comprehension or retention, I'm only addressing the question of
recognition) many words simultaneously, in parallel, without needing
to saccade serially to each one.



Re: Bogus Neuroscience [WAS Re: [agi] Human memory and number of synapses]

2007-10-22 Thread J Storrs Hall, PhD
On Monday 22 October 2007 08:01:55 pm, Richard Loosemore wrote:

 Did you ever try to parse a sentence with more than one noun in it?
 
 Well, all right:  but please be assured that the rest of us do in fact 
 do that.

Why make insulting personal remarks instead of explaining your reasoning?
(RL, Sat Oct  6 02:48:54 2007)



Re: Bogus Neuroscience [WAS Re: [agi] Human memory and number of synapses]

2007-10-22 Thread J Storrs Hall, PhD
On Monday 22 October 2007 08:48:20 pm, Russell Wallace wrote:
 On 10/23/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
  Still don't buy it. What the article amounts to is that speed-reading is
  fake. No kind of recognition beyond skimming (e.g. just ignoring a
  substantial proportion of the text) is called for to explain the observed
  performance.
 
 And I'm saying nevermind articles, try it for yourself. I tried the
 experiment, before I wrote that earlier post, it's easy to do. You'll
 find you do in fact recognize (I'm making no claims about rate of
 comprehension or retention, I'm only addressing the question of
 recognition) many words simultaneously, in parallel, without needing
 to saccade serially to each one.

Still don't buy it. Saccades are normally well below the conscious level, and 
the vast majority of what goes on cognitively is not available to 
introspection. Any good reader gets to the point where the sentence meanings, 
not the words at all, are the only thing that breaks into the conscious 
level. (You can read with essentially complete semantic comprehension and 
still be quite unable to repeat any of the text verbatim.)

BTW, I'm not trying to say that no concurrent recognition happens in the 
brain -- I'm sure that it does. I merely maintain that I haven't seen any 
evidence to convince me that it occurs in that particular part of vision. 

Josh



Re: Bogus Neuroscience [WAS Re: [agi] Human memory and number of synapses]

2007-10-22 Thread J Storrs Hall, PhD
On Monday 22 October 2007 09:33:24 pm, Edward W. Porter wrote:
 Richard,
...
 Are you capable of understanding how that might be considered insulting?

I think in all seriousness that he literally cannot understand. Richard's 
emotional interaction is very similar to that of some autistic people I have 
known. The recent spat over Turing completeness started when I made a remark 
I thought to be humorous -- *quoting exactly the words Richard had used to 
make the same joke* to someone else -- and he took the same words he had said 
as a disparaging insult when said to him.

Josh



Re: Bogus Neuroscience [WAS Re: [agi] Human memory and number of synapses]

2007-10-22 Thread Russell Wallace
On 10/23/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
 Still don't buy it. Saccades are normally well below the conscious level, and
 a vast majority of what goes on cognitively is not available to
 introspection. Any good reader gets to the point where the sentence meanings,
 not the words at all, are the only thing that breaks into the conscious
 level. (you can read with essentially complete semantic comprehension and
 still be quite unable to repeat any of the text verbatim.)

Sure, but saccades and word recognition are like breathing - normally
they operate subconsciously, but you can become aware and take control
of them if you so choose. Again this isn't abstruse theory - try it
and see, the experiment can be done in seconds.



Re: Bogus Neuroscience [WAS Re: [agi] Human memory and number of synapses]

2007-10-22 Thread J Storrs Hall, PhD
You can DO them consciously but that doesn't necessarily mean that you can 
intentionally become conscious of the ones you are doing unconsciously.

Try cutting a hole in a piece of paper and moving it smoothly across another 
page that has text on it. When your eye tracks the smoothly moving page, what 
appears through the hole is a blur.

Josh


On Monday 22 October 2007 10:23:12 pm, Russell Wallace wrote:
 On 10/23/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
  Still don't buy it. Saccades are normally well below the conscious level, 
and
  a vast majority of what goes on cognitively is not available to
  introspection. Any good reader gets to the point where the sentence 
meanings,
  not the words at all, are the only thing that breaks into the conscious
  level. (you can read with essentially complete semantic comprehension and
  still be quite unable to repeat any of the text verbatim.)
 
 Sure, but saccades and word recognition are like breathing - normally
 they operate subconsciously, but you can become aware and take control
 of them if you so choose. Again this isn't abstruse theory - try it
 and see, the experiment can be done in seconds.
 
 
 




Re: Bogus Neuroscience [WAS Re: [agi] Human memory and number of synapses]

2007-10-22 Thread Russell Wallace
On 10/23/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
 You can DO them consciously but that doesn't necessarily mean that you can
 intentionally become conscious of the ones you are doing unconsciously.

One every few seconds happens involuntarily when I try not to let any
through at all; but when concentrating, it's no more unnoticeable than
breathing is. I don't believe it's that much of a freakish talent :)

 Try cutting a hole in a piece of paper and moving it smoothly across another
 page that has text on it. When your eye tracks the smoothly moving page, what
 appears through the hole is a blur.

Absolutely - as can be readily verified right now just by focusing on
one finger while moving it across your monitor; it's easy to be
conscious of the fact that the contents of the screen are a blur.



RE: [agi] An AGI Test/Prize

2007-10-22 Thread John G. Rose
Vladimir,

 

I'm using "system" as a kind of general word for a set and operator(s).

 

You are understanding it correctly, except "templates" is not quite right. The
templates are actually a vast internal complex of structure which includes
morphisms, which are like templates.

 

But you are right, it does seem like a categorization approach. When you say
categorization approach, can you point out an example of that which I can look
into?

 

John

 

 

From: Vladimir Nesov [mailto:[EMAIL PROTECTED] 



John,

What do you mean by system? You imply that these objects have a structure,
or equivalently are abstract models of original input. So, you take original
input in whatever form it's coming in and based on it you create instances
of abstract structures according to templates that are known to system. Is
it essentially correct? If so, it's very similar to categorization approach:
you observe experience indirectly, through categorization structure that
current perception system produces for it. 
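Vladimir's reading above (raw input is observed only indirectly, through instances of abstract structures created from templates the system already knows) can be sketched roughly as follows. This is only an illustration of that reading; the class names, the feature sets, and the subset-matching rule are all hypothetical, not part of either proposal:

```python
# Minimal sketch of template-based categorization: downstream processing
# never sees raw input, only the abstract instances produced from it.
from dataclasses import dataclass

@dataclass
class Template:
    name: str
    required: frozenset  # features the raw input must contain to match

@dataclass
class Instance:
    template: str
    features: frozenset  # the instance carries only template-level features

def categorize(raw_features, templates):
    """Return abstract instances for every template the raw input satisfies."""
    found = frozenset(raw_features)
    return [Instance(t.name, t.required) for t in templates
            if t.required <= found]

templates = [
    Template("resistor", frozenset({"two-terminal", "passive", "ohmic"})),
    Template("capacitor", frozenset({"two-terminal", "passive", "stores-charge"})),
]

raw = {"two-terminal", "passive", "ohmic", "metallic"}
print([i.template for i in categorize(raw, templates)])  # ['resistor']
```

Here "observing experience indirectly" shows up as the fact that `categorize` returns only `Instance` objects; the extra raw feature ("metallic") never propagates past the perception step.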

If it needs to model a boolean-based space for some sort of sampled-data
world it sees and correlates to, the thing would generate a boolean algebra
modeled and represented onto the informational structure for that particular
space instance being studied. For example, electronics theory: it would need
to model that world as an instance based on electronics descriptor items and
operators in that particular world or space set. The electronics-theory world
could be spat out as something very minor that it understands.


So, it would assemble a description 'in place' from local rules, based on
information provided by specific experience. Is that a correct restatement?
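The phrase "generate a boolean algebra modeled onto that informational structure" is vague, but one minimal concrete reading (an assumption on the editor's part, not necessarily what is meant above) is the powerset algebra over a space instance's descriptor items:

```python
# Loose sketch: model a studied space as the powerset boolean algebra 2^D
# over its descriptor items D, with meet/join/complement as the operators.
from itertools import combinations

def powerset_algebra(descriptors):
    """Return (elements, meet, join, complement) of the algebra 2^D."""
    d = frozenset(descriptors)
    elements = [frozenset(c) for r in range(len(d) + 1)
                for c in combinations(sorted(d), r)]
    meet = lambda a, b: a & b   # greatest lower bound
    join = lambda a, b: a | b   # least upper bound
    comp = lambda a: d - a      # complement relative to the top element
    return elements, meet, join, comp

# "Electronics theory" as a tiny space instance over three descriptor items
elems, meet, join, comp = powerset_algebra({"voltage", "current", "resistance"})
a = frozenset({"voltage", "current"})
assert comp(comp(a)) == a                # complement is an involution
assert meet(a, comp(a)) == frozenset()   # a AND NOT a = bottom
print(len(elems))  # 8 elements for |D| = 3
```

Under this reading, "spitting out electronics theory as a world instance" would amount to instantiating such an algebra over a domain-specific descriptor set; how the descriptor items themselves get learned is left entirely open here.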

 

Not sure if my terminology is very standard but do you understand the
thinking? It may very well be morphic to other AGI structures or theories I
don't know but I kind of like the way it works represented as such because
it seems simple and not messy but very comprehensive and has other good
qualities.


It's very vague, but with a stretch of imagination it can be mapped to many
other views. It's unclear at this level of detail.
 
