Re: [agi] Hutter - A fundamental misdirection?

2010-07-07 Thread Gabriel Recchia
 In short, instead of a pot of neurons, we might instead have a pot of
dozens of types of
 neurons that each have their own complex rules regarding what other types
of neurons they
 can connect to, and how they process information...

 ...there is plenty of evidence (from the slowness of evolution, the large
number (~200)
 of neuron types, etc.), that it is many-layered and quite complex...

The disconnect between the low-level neural hardware and the implementation
of algorithms that build conceptual spaces via dimensionality
reduction--which generally ignore facts such as the existence of different
types of neurons, the apparently hierarchical organization of neocortex,
etc.--seems significant. Have there been attempts to develop computational
models capable of LSA-style feats (e.g., constructing a vector space in
which words with similar meanings tend to be relatively close to each other)
that take into account basic facts about how neurons actually operate
(ideally in a more sophisticated way than the nodes of early connectionist
networks which, as we now know, are not particularly neuron-like at all)? If
so, I would love to know about them.
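
To pin down the target behavior, here is a minimal sketch of the LSA-style feat in
question (building a vector space in which words from similar contexts land near
each other), using a plain SVD over a toy word-by-document count matrix. The corpus
and the choice of two retained dimensions are illustrative assumptions, and nothing
here is neurally motivated, which is exactly the gap being asked about:

```python
import numpy as np

# Toy corpus; rows of the count matrix are words, columns are documents.
docs = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "stocks fell as markets closed",
    "markets rose and stocks rallied",
]
vocab = sorted({w for d in docs for w in d.split()})
counts = np.array([[d.split().count(w) for d in docs] for w in vocab], float)

# LSA: truncated SVD of the word-by-document matrix.
U, s, Vt = np.linalg.svd(counts, full_matrices=False)
k = 2                          # keep the 2 largest singular values
word_vecs = U[:, :k] * s[:k]   # word coordinates in the reduced space

def similarity(w1, w2):
    """Cosine similarity between two words in the LSA space."""
    a, b = word_vecs[vocab.index(w1)], word_vecs[vocab.index(w2)]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Words sharing contexts land close: compare "cat"/"dog" vs. "cat"/"stocks".
print(similarity("cat", "dog"), similarity("cat", "stocks"))
```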


On Tue, Jun 29, 2010 at 3:02 PM, Ian Parker ianpark...@gmail.com wrote:

 The paper seems very similar in principle to LSA. What you need for a
 concept vector  (or position) is the application of LSA followed by K-Means
 which will give you your concept clusters.

 I would not knock Hutter too much. After all, LSA reduces {primavera,
 manantial, salsa, resorte} to one word, giving a 2-bit saving on Hutter.


   - Ian Parker


 On 29 June 2010 07:32, rob levy r.p.l...@gmail.com wrote:

 Sorry, the link I included was invalid, this is what I meant:


 http://www.geog.ucsb.edu/~raubal/Publications/RefConferences/ICSC_2009_AdamsRaubal_Camera-FINAL.pdf


 On Tue, Jun 29, 2010 at 2:28 AM, rob levy r.p.l...@gmail.com wrote:

 On Mon, Jun 28, 2010 at 5:23 PM, Steve Richfield 
 steve.richfi...@gmail.com wrote:

 Rob,

 I just LOVE opaque postings, because they identify people who see things
 differently than I do. I'm not sure what you are saying here, so I'll make
 some random responses to exhibit my ignorance and elicit more 
 explanation.


 I think based on what you wrote, you understood (mostly) what I was
 trying to get across.  So I'm glad it was at least quasi-intelligible. :)


  It sounds like this is a finer measure than the dimensionality that I
 was referencing. However, I don't see how to reduce anything as quantized 
 as
 dimensionality into finer measures. Can you say some more about this?


 I was just referencing Gardenfors' research program of conceptual
 spaces (I was intentionally vague about committing to this fully though
 because I don't necessarily think this is the whole answer).  Page 2 of this
 article summarizes it pretty succinctly:
 http://www.geog.ucsb.edu/~raubal/Publications/RefConferences/ICSC_2009_AdamsRaubal_Camera-FINAL.pdf



 However, different people's brains, even the brains of identical twins,
 have DIFFERENT mappings. This would seem to mandate experience-formed
 topology.



 Yes definitely.


  Since these conceptual spaces that structure sensorimotor
 expectation/prediction (including in higher order embodied exploration of
 concepts I think) are multidimensional spaces, it seems likely that some
 kind of neural computation over these spaces must occur,


 I agree.


 though I wonder what it actually would be in terms of neurons, (and if
 that matters).


 I don't see any route to the answer except via neurons.


 I agree this is true of natural intelligence, though maybe in modeling,
 the neural level can be shortcut to the topo map level without recourse to
 neural computation (use some more straightforward computation like matrix
 algebra instead).

 Rob




Re: [agi] Hutter - A fundamental misdirection?

2010-07-07 Thread Ian Parker
There is very little; someone should do research on this. Here is a paper on
language fitness.

http://kybele.psych.cornell.edu/~edelman/elcfinal.pdf

LSA is *not* discussed, nor is any fitness concept for the language itself.
Similar-sounding (or similarly written) words must be capable of disambiguation
using LSA; otherwise the language would be unfit. Let us take a *gedanken*
language in which *spring* (the example I have taken from my Spanish) cannot
be disambiguated. Suppose *spring* meant *step forward* as well as its other
meanings. If I am learning to dance I do not think about *primavera*,
*resorte*, or *manantial*, but I do think about *salsa*. If I did not know
whether I was to jump or put my leg forward it would be extremely confusing.
To my knowledge, fitness in this context has not been discussed.

In fact perhaps the only work that is relevant is my own, which I posted here
some time ago. The reduction in entropy (compression) obtained with LSA was
disappointing. The different meanings (different words in Spanish & other
languages) are compressed more readily. Both Spanish and English have a
degree of fitness which (just possibly) is definable in LSA terms.
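
For what it's worth, the 2-bit figure mentioned in this thread is just log2(4):
collapsing four equiprobable surface forms into one concept token saves two bits
per occurrence. A toy entropy calculation under that equiprobability assumption
(the probabilities are invented):

```python
import math

# Four Spanish surface forms that English collapses into "spring";
# assume (for illustration) they occur with equal probability.
p_forms = [0.25, 0.25, 0.25, 0.25]

def entropy(ps):
    """Shannon entropy in bits."""
    return -sum(p * math.log2(p) for p in ps if p > 0)

before = entropy(p_forms)   # 2.0 bits to pick among the four forms
after = entropy([1.0])      # 0 bits once they are one concept token
print(before - after)       # 2.0-bit saving per occurrence
```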


  - Ian Parker

On 7 July 2010 17:12, Gabriel Recchia grecc...@gmail.com wrote:

  In short, instead of a pot of neurons, we might instead have a pot of
 dozens of types of
  neurons that each have their own complex rules regarding what other types
 of neurons they
  can connect to, and how they process information...

  ...there is plenty of evidence (from the slowness of evolution, the large
 number (~200)
  of neuron types, etc.), that it is many-layered and quite complex...

 The disconnect between the low-level neural hardware and the implementation
 of algorithms that build conceptual spaces via dimensionality
 reduction--which generally ignore facts such as the existence of different
 types of neurons, the apparently hierarchical organization of neocortex,
 etc.--seems significant. Have there been attempts to develop computational
 models capable of LSA-style feats (e.g., constructing a vector space in
 which words with similar meanings tend to be relatively close to each other)
 that take into account basic facts about how neurons actually operate
 (ideally in a more sophisticated way than the nodes of early connectionist
 networks which, as we now know, are not particularly neuron-like at all)? If
 so, I would love to know about them.


 On Tue, Jun 29, 2010 at 3:02 PM, Ian Parker ianpark...@gmail.com wrote:

 The paper seems very similar in principle to LSA. What you need for a
 concept vector  (or position) is the application of LSA followed by K-Means
 which will give you your concept clusters.

 I would not knock Hutter too much. After all, LSA reduces {primavera,
 manantial, salsa, resorte} to one word, giving a 2-bit saving on Hutter.


   - Ian Parker


 On 29 June 2010 07:32, rob levy r.p.l...@gmail.com wrote:

 Sorry, the link I included was invalid, this is what I meant:


 http://www.geog.ucsb.edu/~raubal/Publications/RefConferences/ICSC_2009_AdamsRaubal_Camera-FINAL.pdf


 On Tue, Jun 29, 2010 at 2:28 AM, rob levy r.p.l...@gmail.com wrote:

 On Mon, Jun 28, 2010 at 5:23 PM, Steve Richfield 
 steve.richfi...@gmail.com wrote:

 Rob,

 I just LOVE opaque postings, because they identify people who see
 things differently than I do. I'm not sure what you are saying here, so 
 I'll
 make some random responses to exhibit my ignorance and elicit more
 explanation.


 I think based on what you wrote, you understood (mostly) what I was
 trying to get across.  So I'm glad it was at least quasi-intelligible. :)


  It sounds like this is a finer measure than the dimensionality that
 I was referencing. However, I don't see how to reduce anything as 
 quantized
 as dimensionality into finer measures. Can you say some more about this?


 I was just referencing Gardenfors' research program of conceptual
 spaces (I was intentionally vague about committing to this fully though
 because I don't necessarily think this is the whole answer).  Page 2 of 
 this
 article summarizes it pretty succinctly:
 http://www.geog.ucsb.edu/~raubal/Publications/RefConferences/ICSC_2009_AdamsRaubal_Camera-FINAL.pdf



 However, different people's brains, even the brains of identical twins,
 have DIFFERENT mappings. This would seem to mandate experience-formed
 topology.



 Yes definitely.


   Since these conceptual spaces that structure sensorimotor
 expectation/prediction (including in higher order embodied exploration of
 concepts I think) are multidimensional spaces, it seems likely that some
 kind of neural computation over these spaces must occur,


 I agree.


 though I wonder what it actually would be in terms of neurons, (and if
 that matters).


 I don't see any route to the answer except via neurons.


 I agree this is true of natural intelligence, though maybe in modeling, the
 neural level can be shortcut to the topo map level without recourse to
 neural computation (use some more straightforward computation like matrix
 algebra instead).

Re: [agi] Hutter - A fundamental misdirection?

2010-07-07 Thread Matt Mahoney
Gorrell and Webb describe a neural implementation of LSA that seems more 
biologically plausible than the usual matrix factoring implementation.
http://www.dcs.shef.ac.uk/~genevieve/gorrell_webb.pdf
 
In the usual implementation, a word-word matrix A is factored as A = U S V^T,
where S is diagonal (containing the singular values), and then the smaller
elements of S are discarded. In the Gorrell model, U and V are the weights of a
3-layer neural network mapping words to words, and the nonzero elements of S
represent the semantic space in the middle layer. As the network is trained,
neurons are added to S. Thus the network is trained online in a single pass,
unlike factoring, which is offline.
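
For contrast with the online, neuron-growing training that Gorrell and Webb
describe, here is a minimal numeric sketch of the usual batch factoring; the
words and co-occurrence counts are invented for illustration:

```python
import numpy as np

# Toy word-word co-occurrence matrix A (counts are invented).
words = ["spring", "primavera", "resorte", "salsa"]
A = np.array([[0., 3., 2., 1.],
              [3., 0., 1., 0.],
              [2., 1., 0., 0.],
              [1., 0., 0., 2.]])

# Batch LSA: A = U S V^T; keep only the k largest singular values.
U, s, Vt = np.linalg.svd(A)
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # best rank-k approximation

# U[:, :k] scaled by s[:k] plays the role of the hidden semantic layer
# that the online algorithm grows one neuron at a time.
semantic = U[:, :k] * s[:k]
for w, v in zip(words, np.round(semantic, 2)):
    print(w, v)
```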

-- Matt Mahoney, matmaho...@yahoo.com





From: Gabriel Recchia grecc...@gmail.com
To: agi agi@v2.listbox.com
Sent: Wed, July 7, 2010 12:12:00 PM
Subject: Re: [agi] Hutter - A fundamental misdirection?

 In short, instead of a pot of neurons, we might instead have a pot of dozens
 of types of neurons that each have their own complex rules regarding what
 other types of neurons they can connect to, and how they process
 information...

 ...there is plenty of evidence (from the slowness of evolution, the large
 number (~200) of neuron types, etc.), that it is many-layered and quite
 complex...

The disconnect between the low-level neural hardware and the implementation of 
algorithms that build conceptual spaces via dimensionality reduction--which 
generally ignore facts such as the existence of different types of neurons, the 
apparently hierarchical organization of neocortex, etc.--seems significant. 
Have 
there been attempts to develop computational models capable of LSA-style feats 
(e.g., constructing a vector space in which words with similar meanings tend to 
be relatively close to each other) that take into account basic facts about how 
neurons actually operate (ideally in a more sophisticated way than the nodes of 
early connectionist networks which, as we now know, are not particularly 
neuron-like at all)? If so, I would love to know about them.



On Tue, Jun 29, 2010 at 3:02 PM, Ian Parker ianpark...@gmail.com wrote:

The paper seems very similar in principle to LSA. What you need for a concept 
vector  (or position) is the application of LSA followed by K-Means which will 
give you your concept clusters.


I would not knock Hutter too much. After all, LSA reduces {primavera,
manantial, salsa, resorte} to one word, giving a 2-bit saving on Hutter.




  - Ian Parker



On 29 June 2010 07:32, rob levy r.p.l...@gmail.com wrote:

Sorry, the link I included was invalid, this is what I meant: 


http://www.geog.ucsb.edu/~raubal/Publications/RefConferences/ICSC_2009_AdamsRaubal_Camera-FINAL.pdf




On Tue, Jun 29, 2010 at 2:28 AM, rob levy r.p.l...@gmail.com wrote:

On Mon, Jun 28, 2010 at 5:23 PM, Steve Richfield steve.richfi...@gmail.com 
wrote:

Rob,

I just LOVE opaque postings, because they identify people who see things 
differently than I do. I'm not sure what you are saying here, so I'll make 
some 
random responses to exhibit my ignorance and elicit more explanation.




I think based on what you wrote, you understood (mostly) what I was trying 
to 
get across.  So I'm glad it was at least quasi-intelligible. :)
 
 It sounds like this is a finer measure than the dimensionality that I was 
referencing. However, I don't see how to reduce anything as quantized as 
dimensionality into finer measures. Can you say some more about this?




I was just referencing Gardenfors' research program of conceptual spaces 
(I 
was intentionally vague about committing to this fully though because I 
don't 
necessarily think this is the whole answer).  Page 2 of this article 
summarizes 
it pretty succinctly: 
http://www.geog.ucsb.edu/~raubal/Publications/RefConferences/ICSC_2009_AdamsRaubal_Camera-FINAL.pdf


 
However, different people's brains, even the brains of identical twins, have 
DIFFERENT mappings. This would seem to mandate experience-formed topology.
 



Yes definitely.
 
Since these conceptual spaces that structure sensorimotor 
expectation/prediction 
(including in higher order embodied exploration of concepts I think) are 
multidimensional spaces, it seems likely that some kind of neural 
computation 
over these spaces must occur,

I agree.
 

though I wonder what it actually would be in terms of neurons, (and if that 
matters).

I don't see any route to the answer except via neurons.


I agree this is true of natural intelligence, though maybe in modeling, the 
neural level can be shortcut to the topo map level without recourse to 
neural 
computation (use some more straightforward computation like matrix algebra 
instead).

Rob


Re: [agi] Hutter - A fundamental misdirection?

2010-06-29 Thread rob levy
On Mon, Jun 28, 2010 at 5:23 PM, Steve Richfield
steve.richfi...@gmail.com wrote:

 Rob,

 I just LOVE opaque postings, because they identify people who see things
 differently than I do. I'm not sure what you are saying here, so I'll make
 some random responses to exhibit my ignorance and elicit more explanation.


I think based on what you wrote, you understood (mostly) what I was trying
to get across.  So I'm glad it was at least quasi-intelligible. :)


  It sounds like this is a finer measure than the dimensionality that I
 was referencing. However, I don't see how to reduce anything as quantized as
 dimensionality into finer measures. Can you say some more about this?


I was just referencing Gardenfors' research program of conceptual spaces
(I was intentionally vague about committing to this fully though because I
don't necessarily think this is the whole answer).  Page 2 of this article
summarizes it pretty succinctly:
http://www.geog.ucsb.edu/~raubal/Publications/RefConferences/ICSC_2009_AdamsRaubal_Camera-FINAL.pdf



 However, different people's brains, even the brains of identical twins,
 have DIFFERENT mappings. This would seem to mandate experience-formed
 topology.



Yes definitely.


 Since these conceptual spaces that structure sensorimotor
 expectation/prediction (including in higher order embodied exploration of
 concepts I think) are multidimensional spaces, it seems likely that some
 kind of neural computation over these spaces must occur,


 I agree.


 though I wonder what it actually would be in terms of neurons, (and if
 that matters).


 I don't see any route to the answer except via neurons.


I agree this is true of natural intelligence, though maybe in modeling, the
neural level can be shortcut to the topo map level without recourse to
neural computation (use some more straightforward computation like matrix
algebra instead).

Rob





Re: [agi] Hutter - A fundamental misdirection?

2010-06-29 Thread rob levy
Sorry, the link I included was invalid, this is what I meant:

http://www.geog.ucsb.edu/~raubal/Publications/RefConferences/ICSC_2009_AdamsRaubal_Camera-FINAL.pdf

On Tue, Jun 29, 2010 at 2:28 AM, rob levy r.p.l...@gmail.com wrote:

 On Mon, Jun 28, 2010 at 5:23 PM, Steve Richfield 
 steve.richfi...@gmail.com wrote:

 Rob,

 I just LOVE opaque postings, because they identify people who see things
 differently than I do. I'm not sure what you are saying here, so I'll make
 some random responses to exhibit my ignorance and elicit more explanation.


 I think based on what you wrote, you understood (mostly) what I was trying
 to get across.  So I'm glad it was at least quasi-intelligible. :)


  It sounds like this is a finer measure than the dimensionality that I
 was referencing. However, I don't see how to reduce anything as quantized as
 dimensionality into finer measures. Can you say some more about this?


 I was just referencing Gardenfors' research program of conceptual spaces
 (I was intentionally vague about committing to this fully though because I
 don't necessarily think this is the whole answer).  Page 2 of this article
 summarizes it pretty succinctly:
 http://www.geog.ucsb.edu/~raubal/Publications/RefConferences/ICSC_2009_AdamsRaubal_Camera-FINAL.pdf



 However, different people's brains, even the brains of identical twins,
 have DIFFERENT mappings. This would seem to mandate experience-formed
 topology.



 Yes definitely.


  Since these conceptual spaces that structure sensorimotor
 expectation/prediction (including in higher order embodied exploration of
 concepts I think) are multidimensional spaces, it seems likely that some
 kind of neural computation over these spaces must occur,


 I agree.


 though I wonder what it actually would be in terms of neurons, (and if
 that matters).


 I don't see any route to the answer except via neurons.


 I agree this is true of natural intelligence, though maybe in modeling, the
 neural level can be shortcut to the topo map level without recourse to
 neural computation (use some more straightforward computation like matrix
 algebra instead).

 Rob






Re: [agi] Hutter - A fundamental misdirection?

2010-06-29 Thread Ian Parker
The paper seems very similar in principle to LSA. What you need for a
concept vector (or position) is the application of LSA followed by K-Means,
which will give you your concept clusters.

I would not knock Hutter too much. After all, LSA reduces {primavera,
manantial, salsa, resorte} to one word, giving a 2-bit saving on Hutter.
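
A compact sketch of that pipeline (LSA via truncated SVD, then K-Means over the
reduced coordinates), assuming scikit-learn is available; the four toy documents
contrive two senses of *spring*:

```python
from sklearn.cluster import KMeans
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "spring flowers bloom in the garden",
    "the garden blooms every spring",
    "the metal spring in the mattress broke",
    "a coiled metal spring stores energy",
]

# LSA: word counts followed by truncated SVD down to 2 dimensions.
X = CountVectorizer().fit_transform(docs)
lsa = TruncatedSVD(n_components=2).fit_transform(X)

# K-Means over the LSA coordinates yields the concept clusters.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(lsa)
print(labels)  # e.g. [0 0 1 1]: season sense vs. coil sense (order may vary)
```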


  - Ian Parker

On 29 June 2010 07:32, rob levy r.p.l...@gmail.com wrote:

 Sorry, the link I included was invalid, this is what I meant:


 http://www.geog.ucsb.edu/~raubal/Publications/RefConferences/ICSC_2009_AdamsRaubal_Camera-FINAL.pdf


 On Tue, Jun 29, 2010 at 2:28 AM, rob levy r.p.l...@gmail.com wrote:

 On Mon, Jun 28, 2010 at 5:23 PM, Steve Richfield 
 steve.richfi...@gmail.com wrote:

 Rob,

 I just LOVE opaque postings, because they identify people who see things
 differently than I do. I'm not sure what you are saying here, so I'll make
 some random responses to exhibit my ignorance and elicit more explanation.


 I think based on what you wrote, you understood (mostly) what I was trying
 to get across.  So I'm glad it was at least quasi-intelligible. :)


  It sounds like this is a finer measure than the dimensionality that I
 was referencing. However, I don't see how to reduce anything as quantized as
 dimensionality into finer measures. Can you say some more about this?


 I was just referencing Gardenfors' research program of conceptual spaces
 (I was intentionally vague about committing to this fully though because I
 don't necessarily think this is the whole answer).  Page 2 of this article
 summarizes it pretty succinctly:
 http://www.geog.ucsb.edu/~raubal/Publications/RefConferences/ICSC_2009_AdamsRaubal_Camera-FINAL.pdf



 However, different people's brains, even the brains of identical twins,
 have DIFFERENT mappings. This would seem to mandate experience-formed
 topology.



 Yes definitely.


  Since these conceptual spaces that structure sensorimotor
 expectation/prediction (including in higher order embodied exploration of
 concepts I think) are multidimensional spaces, it seems likely that some
 kind of neural computation over these spaces must occur,


 I agree.


 though I wonder what it actually would be in terms of neurons, (and if
 that matters).


 I don't see any route to the answer except via neurons.


 I agree this is true of natural intelligence, though maybe in modeling,
 the neural level can be shortcut to the topo map level without recourse to
 neural computation (use some more straightforward computation like matrix
 algebra instead).

 Rob




Re: [agi] Hutter - A fundamental misdirection?

2010-06-28 Thread rob levy
In order to have perceptual/conceptual similarity, it might make sense that
there is a distance metric over conceptual spaces mapping (ala Gardenfors or
something like this theory) underlying how the experience of reasoning
through is carried out.  This has the advantage of being motivated by
neuroscience findings (which are seldom convincing, but in this case it is
basic solid neuroscience research) that there are topographic maps in the
brain.  Since these conceptual spaces that structure sensorimotor
expectation/prediction (including in higher order embodied exploration of
concepts I think) are multidimensional spaces, it seems likely that some
kind of neural computation over these spaces must occur, though I wonder
what it actually would be in terms of neurons, (and if that matters).
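
As one possible rendering of this, here is a sketch of a distance metric over a
Gardenfors-style conceptual space; the quality dimensions, attention weights,
and exemplar points below are invented for illustration:

```python
import numpy as np

# A toy conceptual space for taste; coordinates are quality dimensions.
dimensions = ["sweet", "sour", "salty"]
concepts = {
    "lemon":   np.array([0.2, 0.9, 0.1]),
    "candy":   np.array([0.9, 0.1, 0.0]),
    "pretzel": np.array([0.1, 0.0, 0.9]),
}

# Attention weights on each dimension; Gardenfors lets these vary with
# context, which is one way experience could reshape the space.
weights = np.array([1.0, 1.0, 0.5])

def distance(a, b):
    """Weighted Euclidean distance between two concept points."""
    d = concepts[a] - concepts[b]
    return float(np.sqrt(np.sum(weights * d * d)))

print(distance("lemon", "candy"), distance("lemon", "pretzel"))
```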

But that is different from what would be considered quantitative reasoning,
because from the phenomenological perspective the person is training
sensorimotor expectations by perceiving and doing.  And creative conceptual
shifts (or recognition of novel perceptual categories) can also be explained
by this feedback between trained topographic maps and embodied interaction
with environment (experienced at the ecological level as sensorimotor
expectations (driven by neural maps). Sensorimotor expectation is the basis
of the dynamics of perception and conceptualization).


On Sun, Jun 27, 2010 at 7:24 PM, Ben Goertzel b...@goertzel.org wrote:



 On Sun, Jun 27, 2010 at 7:09 PM, Steve Richfield 
 steve.richfi...@gmail.com wrote:

 Ben,

 On Sun, Jun 27, 2010 at 3:47 PM, Ben Goertzel b...@goertzel.org wrote:

  I know what dimensional analysis is, but it would be great if you could
 give an example of how it's useful for everyday commonsense reasoning such
 as, say, a service robot might need to do to figure out how to clean a
 house...


 How much detergent will it need to clean the floors? Hmmm, we need to know
 ounces. We have the length and width of the floor, and the bottle says to
 use 1 oz/M^2. How could we manipulate two M-dimensioned quantities and 1
 oz/M^2 dimensioned quantity to get oz? The only way would seem to be to
 multiply all three numbers together to get ounces. This WITHOUT
 understanding things like surface area, utilization, etc.



 I think that the El Salvadorean maids who come to clean my house
 occasionally, solve this problem without any dimensional analysis or any
 quantitative reasoning at all...

 Probably they solve it based on nearest-neighbor matching against past
 experiences cleaning other dirty floors with water in similarly sized and
 shaped buckets...

 -- ben g




Re: [agi] Hutter - A fundamental misdirection?

2010-06-28 Thread Steve Richfield
Rob,

I just LOVE opaque postings, because they identify people who see things
differently than I do. I'm not sure what you are saying here, so I'll make
some random responses to exhibit my ignorance and elicit more explanation.

On Mon, Jun 28, 2010 at 9:53 AM, rob levy r.p.l...@gmail.com wrote:

 In order to have perceptual/conceptual similarity, it might make sense that
 there is a distance metric over conceptual spaces mapping


 It sounds like this is a finer measure than the dimensionality that I was
referencing. However, I don't see how to reduce anything as quantized as
dimensionality into finer measures. Can you say some more about this?

(ala Gardenfors or something like this theory)  underlying how the
 experience of reasoning through is carried out.

This has the advantage of being motivated by neuroscience findings (which
 are seldom convincing, but in this case it is basic solid neuroscience
 research) that there are topographic maps in the brain.


However, different people's brains, even the brains of identical twins, have
DIFFERENT mappings. This would seem to mandate experience-formed topology.


 Since these conceptual spaces that structure sensorimotor
 expectation/prediction (including in higher order embodied exploration of
 concepts I think) are multidimensional spaces, it seems likely that some
 kind of neural computation over these spaces must occur,


I agree.


 though I wonder what it actually would be in terms of neurons, (and if that
 matters).


I don't see any route to the answer except via neurons.


 But that is different from what would be considered quantitative reasoning,
 because from the phenomenological perspective the person is training
 sensorimotor expectations by perceiving and doing.  And creative conceptual
 shifts (or recognition of novel perceptual categories) can also be explained
 by this feedback between trained topographic maps and embodied interaction
 with environment (experienced at the ecological level as sensorimotor
 expectations (driven by neural maps). Sensorimotor expectation is the basis
 of the dynamics of perception and conceptualization).


All of which is computation of various sorts, the basics of which need to be
understood.

Steve
=

 On Sun, Jun 27, 2010 at 7:24 PM, Ben Goertzel b...@goertzel.org wrote:



 On Sun, Jun 27, 2010 at 7:09 PM, Steve Richfield 
 steve.richfi...@gmail.com wrote:

 Ben,

 On Sun, Jun 27, 2010 at 3:47 PM, Ben Goertzel b...@goertzel.org wrote:

  I know what dimensional analysis is, but it would be great if you could
 give an example of how it's useful for everyday commonsense reasoning such
 as, say, a service robot might need to do to figure out how to clean a
 house...


 How much detergent will it need to clean the floors? Hmmm, we need to
 know ounces. We have the length and width of the floor, and the bottle says
 to use 1 oz/M^2. How could we manipulate two M-dimensioned quantities and 1
 oz/M^2 dimensioned quantity to get oz? The only way would seem to be to
 multiply all three numbers together to get ounces. This WITHOUT
 understanding things like surface area, utilization, etc.



 I think that the El Salvadorean maids who come to clean my house
 occasionally, solve this problem without any dimensional analysis or any
 quantitative reasoning at all...

 Probably they solve it based on nearest-neighbor matching against past
 experiences cleaning other dirty floors with water in similarly sized and
 shaped buckets...

 -- ben g




Re: [agi] Hutter - A fundamental misdirection?

2010-06-27 Thread Ben Goertzel
Hi Steve,

A few comments...

1)
Nobody is trying to implement Hutter's AIXI design; it's a mathematical
design intended as a proof of principle.

2)
Within Hutter's framework, one calculates the shortest program that explains
the data, where "shortest" is measured on Turing machine M.  Given a
sufficient number of observations, the choice of M doesn't matter and AIXI
will eventually learn any computable reward pattern.  However, choosing the
right M can greatly accelerate learning.  In the case of a physical AGI
system, choosing M to incorporate the correct laws of physics would
obviously accelerate learning considerably. (A toy sketch of this
shortest-program preference follows after point 3.)

3)
Many AGI designs try to incorporate prior understanding of the structure &
properties of the physical world, in various ways.  I have a whole chapter
on this in my forthcoming book on OpenCog...  E.g. OpenCog's design
includes a physics-engine, which is used directly and to aid with
inferential extrapolations...
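
The toy sketch promised under point 2: the hand-written hypotheses and their
character-count "lengths" are stand-ins, since real AIXI ranges over all
programs on a universal Turing machine and is incomputable. The data
deliberately match the table example discussed later in this thread:

```python
# "Programs" here are hand-picked Python predicates; their "lengths" are
# just character counts of a rough source string.  Real AIXI sums over
# all programs on a universal machine and is incomputable.
history = [2, 2, 2, 2]            # observations at positions 0..3

hypotheses = [
    ("always 2",        lambda n: 2,                 len("lambda n:2")),
    ("n plus 1",        lambda n: n + 1,             len("lambda n:n+1")),
    ("2 until the end", lambda n: 2 if n < 4 else 0, len("lambda n:2 if n<4 else 0")),
]

# Keep hypotheses consistent with the history; weight each by 2^-length,
# so shorter consistent programs dominate the prediction.
consistent = [(name, f, 2.0 ** -size) for name, f, size in hypotheses
              if all(f(i) == x for i, x in enumerate(history))]
total = sum(w for _, _, w in consistent)
for name, f, w in consistent:
    print(name, "weight", round(w / total, 6), "predicts", f(len(history)))
```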

So I agree with most of your points, but I don't find them original except
in phrasing ;)

... ben


On Sun, Jun 27, 2010 at 2:30 PM, Steve Richfield
steve.richfi...@gmail.com wrote:

 Ben, et al,

 *I think I may finally grok the fundamental misdirection that current AGI
 thinking has taken!*

 This is a bit subtle, and hence subject to misunderstanding. Therefore I
 will first attempt to explain what I see, WITHOUT so much trying to convince
 you (or anyone) that it is necessarily correct. Once I convey my vision,
 then let the chips fall where they may.

 On Sun, Jun 27, 2010 at 6:35 AM, Ben Goertzel b...@goertzel.org wrote:

 Hutter's AIXI for instance works [very roughly speaking] by choosing the
 most compact program that, based on historical data, would have yielded
 maximum reward


 ... and there it is! What did I see?

 Example applicable to the lengthy following discussion:
 1 -> 2
 2 -> 2
 3 -> 2
 4 -> 2
 5 -> ?
 What is "?"?

 Now, I'll tell you that the left column represents the distance along a 4.5
 unit long table, and the right column represents the distance above the
 floor that you will be as you walk the length of the table. Knowing this,
 without ANY supporting physical experience, I would guess ? to be zero, or
 maybe a little more if I were to step off of the table and land onto
 something lower, like the shoes that I left there.

 In an imaginary world where a GI boots up with a complete understanding of
 physics, etc., we wouldn't prefer the simplest program at all, but rather
 the simplest representation of the real world that is not
 physics/math-*in*consistent with our observations. All observations would be presumed to
 be consistent with the response curves of our sensors, showing a world in
 which Newton's laws prevail, etc. Armed with these presumptions, our
 physics-complete AGI would look for the simplest set of *UN*observed
 phenomena that explained the observed phenomena. This theory of a
 physics-complete AGI seems undeniable, but of course, we are NOT born
 physics-complete - or are we?!

 This all comes down to the limits of representational math. At great risk
 of hand-waving on a keyboard, I'll try to explain by pseudo-translating the
 concepts into NN/AGI terms.

 We all know about layering and columns in neural systems, and understand
 Bayesian math. However, let's dig a little deeper into exactly what is being
 represented by the outputs (or terms for dyed-in-the-wool AGIers). All
 physical quantities are well known to have value, significance, and
 dimensionality. Neurons/Terms (N/T) could easily be protein-tagged as to the
 dimensionality that their functionality is capable of producing, so that
 only compatible N/Ts could connect to them. However, let's dig a little
 deeper into dimensionality

 Physicists think we live in an MKS (Meters, Kilograms, Seconds) world, and
 that all dimensionality can be reduced to MKS. For physics purposes they may
 be right (see challenge below), but maybe for information processing
 purposes, they are missing some important things.

 *Challenge to MKS:* Note that some physicists and most astronomers utilize
 *dimensional analysis* where they experimentally play with the
 dimensions of observations to inductively find manipulations that would
 yield the dimensions of unobservable quantities, e.g. the mass of a star,
 and then run the numbers through the same manipulation to see if the results
 at least have the right exponent. However, many/most such manipulations
 produce nonsense, so they simply use this technique to jump from
 observations to a list of prospective results with wildly different
 exponents, and discard the results with the ridiculous exponents to find the
 correct result. The frequent failures of this process indirectly
 demonstrate that there is more to dimensionality (and hence physics) than
 just MKS. Let's accept that, and presume that neurons must have already
 dealt with whatever is missing from current thought.

 Consider, there is some (hopefully finite) set of reasonable 

Re: [agi] Hutter - A fundamental misdirection?

2010-06-27 Thread Steve Richfield
Ben,

What I saw as my central thesis is that propagating carefully conceived
dimensionality information along with classical information could greatly
improve the cognitive process, by FORCING reasonable physics WITHOUT having
to understand (by present concepts of what understanding means) physics.
Hutter was just a foil to explain my thought. Note again my comments
regarding how physicists and astronomers understand some processes through
dimensional analysis that involves NONE of the sorts of understanding
that you might think necessary, yet can predictably come up with the right
answers.

Are you up on the basics of dimensional analysis? The reality is that it is
quite imperfect, but is often able to yield a short list of answers, with
the correct one being somewhere in the list. Usually, the wrong answers are
wildly wrong (they are probably computing something, but NOT what you might
be interested in), and are hence easily eliminated. I suspect that neurons
might be doing much the same, as could formulaic implementations like (most)
present AGI efforts. This might explain natural architecture and guide
human architectural efforts.

In short, instead of a pot of neurons, we might instead have a pot of
dozens of types of neurons that each have their own complex rules regarding
what other types of neurons they can connect to, and how they process
information. Architecture might involve deciding how many of each type to
provide, and what types to put adjacent to what other types, rather than the
more detailed concept now usually thought to exist.

Thanks for helping me wring my thought out here.

Steve
=
On Sun, Jun 27, 2010 at 2:49 PM, Ben Goertzel b...@goertzel.org wrote:


 Hi Steve,

 A few comments...

 1)
 Nobody is trying to implement Hutter's AIXI design, it's a mathematical
 design intended as a proof of principle

 2)
 Within Hutter's framework, one calculates the shortest program that
 explains the data, where shortest is measured on Turing  machine M.
 Given a sufficient number of observations, the choice of M doesn't matter
 and AIXI will eventually learn any computable reward pattern.  However,
 choosing the right M can greatly accelerate learning.  In the case of a
 physical AGI system, choosing M to incorporate the correct laws of physics
 would obviously accelerate learning considerably.

 3)
 Many AGI designs try to incorporate prior understanding of the structure &
 properties of the physical world, in various ways.  I have a whole chapter
 on this in my forthcoming book on OpenCog  E.g. OpenCog's design
 includes a physics-engine, which is used directly and to aid with
 inferential extrapolations...

 So I agree with most of your points, but I don't find them original except
 in phrasing ;)

 ... ben


 On Sun, Jun 27, 2010 at 2:30 PM, Steve Richfield 
 steve.richfi...@gmail.com wrote:

 Ben, et al,

 *I think I may finally grok the fundamental misdirection that current AGI
 thinking has taken!

 *This is a bit subtle, and hence subject to misunderstanding. Therefore I
 will first attempt to explain what I see, WITHOUT so much trying to convince
 you (or anyone) that it is necessarily correct. Once I convey my vision,
 then let the chips fall where they may.

 On Sun, Jun 27, 2010 at 6:35 AM, Ben Goertzel b...@goertzel.org wrote:

 Hutter's AIXI for instance works [very roughly speaking] by choosing the
 most compact program that, based on historical data, would have yielded
 maximum reward


 ... and there it is! What did I see?

 Example applicable to the lengthy following discussion:
 1 -> 2
 2 -> 2
 3 -> 2
 4 -> 2
 5 -> ?
 What is "?"?

 Now, I'll tell you that the left column represents the distance along a
 4.5 unit long table, and the right column represents the distance above the
 floor that you will be as you walk the length of the table. Knowing this,
 without ANY supporting physical experience, I would guess ? to be zero, or
 maybe a little more if I were to step off of the table and land onto
 something lower, like the shoes that I left there.

 In an imaginary world where a GI boots up with a complete understanding of
 physics, etc., we wouldn't prefer the simplest program at all, but rather
 the simplest representation of the real world that is not
 physics/math-*in*consistent with our observations. All observations would be presumed
 to be consistent with the response curves of our sensors, showing a world in
 which Newton's laws prevail, etc. Armed with these presumptions, our
 physics-complete AGI would look for the simplest set of *UN*observed
 phenomena that explained the observed phenomena. This theory of a
 physics-complete AGI seems undeniable, but of course, we are NOT born
 physics-complete - or are we?!

 This all comes down to the limits of representational math. At great risk
 of hand-waving on a keyboard, I'll try to explain by pseudo-translating the
 concepts into NN/AGI terms.

 We all know about layering and columns in neural systems, and 

Re: [agi] Hutter - A fundamental misdirection?

2010-06-27 Thread Ben Goertzel
Steve,

I know what dimensional analysis is, but it would be great if you could give
an example of how it's useful for everyday commonsense reasoning such as,
say, a service robot might need to do to figure out how to clean a house...

thx
ben

On Sun, Jun 27, 2010 at 6:43 PM, Steve Richfield
steve.richfi...@gmail.com wrote:

 Ben,

 What I saw as my central thesis is that propagating carefully conceived
 dimensionality information along with classical information could greatly
 improve the cognitive process, by FORCING reasonable physics WITHOUT having
 to understand (by present concepts of what understanding means) physics.
 Hutter was just a foil to explain my thought. Note again my comments
 regarding how physicists and astronomers understand some processes through
 dimensional analysis that involves NONE of the sorts of understanding
 that you might think necessary, yet can predictably come up with the right
 answers.

 Are you up on the basics of dimensional analysis? The reality is that it is
 quite imperfect, but is often able to yield a short list of answers, with
 the correct one being somewhere in the list. Usually, the wrong answers are
 wildly wrong (they are probably computing something, but NOT what you might
 be interested in), and are hence easily eliminated. I suspect that neurons
 might be doing much the same, as could formulaic implementations like (most)
 present AGI efforts. This might explain natural architecture and guide
 human architectural efforts.

 In short, instead of a pot of neurons, we might instead have a pot of
 dozens of types of neurons that each have their own complex rules regarding
 what other types of neurons they can connect to, and how they process
 information. Architecture might involve deciding how many of each type to
 provide, and what types to put adjacent to what other types, rather than the
 more detailed concept now usually thought to exist.

 Thanks for helping me wring my thought out here.

 Steve
 =
 On Sun, Jun 27, 2010 at 2:49 PM, Ben Goertzel b...@goertzel.org wrote:


 Hi Steve,

 A few comments...

 1)
 Nobody is trying to implement Hutter's AIXI design, it's a mathematical
 design intended as a proof of principle

 2)
 Within Hutter's framework, one calculates the shortest program that
 explains the data, where shortest is measured on Turing  machine M.
 Given a sufficient number of observations, the choice of M doesn't matter
 and AIXI will eventually learn any computable reward pattern.  However,
 choosing the right M can greatly accelerate learning.  In the case of a
 physical AGI system, choosing M to incorporate the correct laws of physics
 would obviously accelerate learning considerably.

 3)
 Many AGI designs try to incorporate prior understanding of the structure &
 properties of the physical world, in various ways.  I have a whole chapter
 on this in my forthcoming book on OpenCog  E.g. OpenCog's design
 includes a physics-engine, which is used directly and to aid with
 inferential extrapolations...

 So I agree with most of your points, but I don't find them original except
 in phrasing ;)

 ... ben


 On Sun, Jun 27, 2010 at 2:30 PM, Steve Richfield 
 steve.richfi...@gmail.com wrote:

 Ben, et al,

 *I think I may finally grok the fundamental misdirection that current
 AGI thinking has taken!*

 This is a bit subtle, and hence subject to misunderstanding. Therefore
 I will first attempt to explain what I see, WITHOUT so much trying to
 convince you (or anyone) that it is necessarily correct. Once I convey my
 vision, then let the chips fall where they may.

 On Sun, Jun 27, 2010 at 6:35 AM, Ben Goertzel b...@goertzel.org wrote:

 Hutter's AIXI for instance works [very roughly speaking] by choosing the
 most compact program that, based on historical data, would have yielded
 maximum reward


 ... and there it is! What did I see?

 Example applicable to the lengthy following discussion:
 1 -> 2
 2 -> 2
 3 -> 2
 4 -> 2
 5 -> ?
 What is "?"?

 Now, I'll tell you that the left column represents the distance along a
 4.5 unit long table, and the right column represents the distance above the
 floor that you will be as you walk the length of the table. Knowing this,
 without ANY supporting physical experience, I would guess ? to be zero, or
 maybe a little more if I were to step off of the table and land onto
 something lower, like the shoes that I left there.

 In an imaginary world where a GI boots up with a complete understanding
 of physics, etc., we wouldn't prefer the simplest program at all, but
 rather the simplest representation of the real world that is not
 physics/math-*in*consistent with our observations. All observations
 would be presumed to be consistent with the response curves of our sensors,
 showing a world in which Newton's laws prevail, etc. Armed with these
 presumptions, our physics-complete AGI would look for the simplest set of
 *UN*observed phenomena that explained the observed phenomena. This
 theory of a 

Re: [agi] Hutter - A fundamental misdirection?

2010-06-27 Thread Steve Richfield
Ben,

On Sun, Jun 27, 2010 at 3:47 PM, Ben Goertzel b...@goertzel.org wrote:

  I know what dimensional analysis is, but it would be great if you could give
 an example of how it's useful for everyday commonsense reasoning such as,
 say, a service robot might need to do to figure out how to clean a house...


How much detergent will it need to clean the floors? Hmmm, we need to know
ounces. We have the length and width of the floor, and the bottle says to
use 1 oz/M^2. How could we manipulate two M-dimensioned quantities and 1
oz/M^2 dimensioned quantity to get oz? The only way would seem to be to
multiply all three numbers together to get ounces. This WITHOUT
understanding things like surface area, utilization, etc.

Of course, throw in a few other available measures and it become REALLY easy
to come up with several wrong answers. This method does NOT avoid wrong
answers, it only provides a mechanism to have the right answer among them.
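
A toy mechanization of that search, assuming a 3 m by 1.5 m floor (invented
numbers) and tracking dimensions as exponent vectors over (meters, ounces); note
that it prints the dimensionally valid wrong answers alongside the right 4.5 oz,
exactly as described:

```python
from itertools import product

# Dimensions as (meters, ounces) exponent pairs: m -> (1, 0); oz/m^2 -> (-2, 1).
quantities = {
    "length":    (3.0, (1, 0)),    # 3 m, invented floor length
    "width":     (1.5, (1, 0)),    # 1.5 m, invented floor width
    "dose_rate": (1.0, (-2, 1)),   # 1 oz per square meter, from the bottle
}
target = (0, 1)                    # we want plain ounces

# Try small integer powers of each quantity; dimensional analysis keeps
# every product whose exponents match the target, right or wrong.
names = list(quantities)
for powers in product([-1, 0, 1, 2], repeat=len(names)):
    dims = tuple(sum(p * quantities[n][1][i] for p, n in zip(powers, names))
                 for i in range(2))
    if dims == target and any(powers):
        value = 1.0
        for p, n in zip(powers, names):
            value *= quantities[n][0] ** p
        print(dict(zip(names, powers)), "->", value, "oz")
```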

While this may be a challenge for dispensing detergent (especially if you
include the distance from the earth to the sun as one of your available
measures), it poses little problem for learning.

I was more concerned with learning than with solving. I believe that
dimensional analysis could help learning a LOT, by maximally constraining
what is used as a basis for learning, without throwing the baby out with
the bathwater, i.e. applying so much constraint that a good solution can't
climb out of the process.

Steve


On Sun, Jun 27, 2010 at 6:43 PM, Steve Richfield
steve.richfi...@gmail.com wrote:

 Ben,

 What I saw as my central thesis is that propagating carefully conceived
 dimensionality information along with classical information could greatly
 improve the cognitive process, by FORCING reasonable physics WITHOUT having
 to understand (by present concepts of what understanding means) physics.
 Hutter was just a foil to explain my thought. Note again my comments
 regarding how physicists and astronomers understand some processes through
 dimensional analysis that involves NONE of the sorts of understanding
 that you might think necessary, yet can predictably come up with the right
 answers.

 Are you up on the basics of dimensional analysis? The reality is that it
 is quite imperfect, but is often able to yield a short list of answers,
 with the correct one being somewhere in the list. Usually, the wrong answers
 are wildly wrong (they are probably computing something, but NOT what you
 might be interested in), and are hence easily eliminated. I suspect that
 neurons might be doing much the same, as could formulaic implementations
 like (most) present AGI efforts. This might explain natural architecture
 and guide human architectural efforts.

 In short, instead of a pot of neurons, we might instead have a pot of
 dozens of types of neurons that each have their own complex rules regarding
 what other types of neurons they can connect to, and how they process
 information. Architecture might involve deciding how many of each type to
 provide, and what types to put adjacent to what other types, rather than the
 more detailed concept now usually thought to exist.

 Thanks for helping me wring my thought out here.

 Steve
 =
 On Sun, Jun 27, 2010 at 2:49 PM, Ben Goertzel b...@goertzel.org wrote:


 Hi Steve,

 A few comments...

 1)
 Nobody is trying to implement Hutter's AIXI design, it's a mathematical
 design intended as a proof of principle

 2)
 Within Hutter's framework, one calculates the shortest program that
 explains the data, where shortest is measured on Turing  machine M.
 Given a sufficient number of observations, the choice of M doesn't matter
 and AIXI will eventually learn any computable reward pattern.  However,
 choosing the right M can greatly accelerate learning.  In the case of a
 physical AGI system, choosing M to incorporate the correct laws of physics
 would obviously accelerate learning considerably.

 3)
 Many AGI designs try to incorporate prior understanding of the structure &
  properties of the physical world, in various ways.  I have a whole chapter
 on this in my forthcoming book on OpenCog  E.g. OpenCog's design
 includes a physics-engine, which is used directly and to aid with
 inferential extrapolations...

 So I agree with most of your points, but I don't find them original
 except in phrasing ;)

 ... ben


 On Sun, Jun 27, 2010 at 2:30 PM, Steve Richfield 
 steve.richfi...@gmail.com wrote:

 Ben, et al,

 *I think I may finally grok the fundamental misdirection that current
 AGI thinking has taken!*

 This is a bit subtle, and hence subject to misunderstanding. Therefore
 I will first attempt to explain what I see, WITHOUT so much trying to
 convince you (or anyone) that it is necessarily correct. Once I convey my
 vision, then let the chips fall where they may.

 On Sun, Jun 27, 2010 at 6:35 AM, Ben Goertzel b...@goertzel.org wrote:

 Hutter's AIXI for instance works [very roughly speaking] by choosing
 the most 

Re: [agi] Hutter - A fundamental misdirection?

2010-06-27 Thread Ben Goertzel
On Sun, Jun 27, 2010 at 7:09 PM, Steve Richfield
steve.richfi...@gmail.com wrote:

 Ben,

 On Sun, Jun 27, 2010 at 3:47 PM, Ben Goertzel b...@goertzel.org wrote:

  I know what dimensional analysis is, but it would be great if you could
 give an example of how it's useful for everyday commonsense reasoning such
 as, say, a service robot might need to do to figure out how to clean a
 house...


 How much detergent will it need to clean the floors? Hmmm, we need to know
 ounces. We have the length and width of the floor, and the bottle says to
 use 1 oz/M^2. How could we manipulate two M-dimensioned quantities and 1
 oz/M^2 dimensioned quantity to get oz? The only way would seem to be to
 multiply all three numbers together to get ounces. This WITHOUT
 understanding things like surface area, utilization, etc.



I think that the El Salvadorean maids who come to clean my house
occasionally, solve this problem without any dimensional analysis or any
quantitative reasoning at all...

Probably they solve it based on nearest-neighbor matching against past
experiences cleaning other dirty floors with water in similarly sized and
shaped buckets...

-- ben g


