> In short, instead of a "pot of neurons", we might instead have a pot of
dozens of types of
> neurons that each have their own complex rules regarding what other types
of neurons they
> can connect to, and how they process information...

> ...there is plenty of evidence (from the slowness of evolution, the large
number (~200)
> of neuron types, etc.), that it is many-layered and quite complex...

There seems to be a significant disconnect between the low-level neural
hardware and the algorithms that build conceptual spaces via dimensionality
reduction, which generally ignore facts such as the existence of different
types of neurons, the apparently hierarchical organization of neocortex, and
so on. Have there been attempts to develop computational models capable of
LSA-style feats (e.g., constructing a vector space in which words with similar
meanings tend to be relatively close to each other) that also take into
account basic facts about how neurons actually operate (ideally in a more
sophisticated way than the nodes of early connectionist networks, which, as we
now know, are not particularly neuron-like at all)? If so, I would love to
know about them.
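
To make concrete what I mean by an LSA-style feat, here is a toy sketch in
Python (a made-up four-document corpus, an arbitrary choice of two latent
dimensions, and nothing remotely neuron-like about it) of the usual
count-matrix-plus-truncated-SVD construction, in which words that occur in
similar contexts end up with nearby vectors:

import numpy as np

# Toy corpus; the overlap is contrived so that "cat"/"dog" share contexts
# and "stocks"/"markets" share contexts.
docs = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "stocks fell on bad news",
    "markets and stocks rallied",
]

vocab = sorted({w for d in docs for w in d.split()})
row = {w: i for i, w in enumerate(vocab)}

# Word-by-document count matrix.
counts = np.zeros((len(vocab), len(docs)))
for j, d in enumerate(docs):
    for w in d.split():
        counts[row[w], j] += 1

# Truncated SVD: keep the top k latent dimensions (the LSA step).
U, s, Vt = np.linalg.svd(counts, full_matrices=False)
k = 2
word_vecs = U[:, :k] * s[:k]   # one row per word in the reduced space

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

print(cosine(word_vecs[row["cat"]], word_vecs[row["dog"]]))      # relatively high
print(cosine(word_vecs[row["cat"]], word_vecs[row["stocks"]]))   # relatively low

The question above is whether anything with this kind of behavior has been
built out of parts that look more like real neurons.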


On Tue, Jun 29, 2010 at 3:02 PM, Ian Parker <ianpark...@gmail.com> wrote:

> The paper seems very similar in principle to LSA. What you need for a
> concept vector (or position) is the application of LSA followed by K-Means,
> which will give you your concept clusters.
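
That pipeline is simple enough to sketch. Something like the following (a
purely illustrative mock-up: toy corpus, two latent dimensions, and three
clusters chosen arbitrarily, with scikit-learn's KMeans used for convenience)
is what I take "LSA followed by K-Means" to mean:

import numpy as np
from sklearn.cluster import KMeans

docs = [
    "spring flowers bloom in warm weather",
    "flowers bloom in the warm spring",
    "the metal coil is a stiff spring",
    "a stiff metal coil under load",
]
vocab = sorted({w for d in docs for w in d.split()})
row = {w: i for i, w in enumerate(vocab)}

counts = np.zeros((len(vocab), len(docs)))
for j, d in enumerate(docs):
    for w in d.split():
        counts[row[w], j] += 1

# LSA step: project words into a 2-dimensional latent space.
U, s, _ = np.linalg.svd(counts, full_matrices=False)
word_vecs = U[:, :2] * s[:2]

# K-Means step: each cluster of word vectors stands in for a "concept".
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(word_vecs)
for w in vocab:
    print(w, labels[row[w]])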
>
> I would not knock Hutter too much. After all, LSA reduces {primavera,
> manantial, salsa, resorte} to one word, giving a 2-bit saving on Hutter.
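
(The 2-bit figure makes sense if the four surface forms are treated as equally
likely: collapsing four equiprobable alternatives into one concept saves
log2(4) = 2 bits per occurrence.)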
>
>
>   - Ian Parker
>
>
> On 29 June 2010 07:32, rob levy <r.p.l...@gmail.com> wrote:
>
>> Sorry, the link I included was invalid, this is what I meant:
>>
>>
>> http://www.geog.ucsb.edu/~raubal/Publications/RefConferences/ICSC_2009_AdamsRaubal_Camera-FINAL.pdf
>>
>>
>> On Tue, Jun 29, 2010 at 2:28 AM, rob levy <r.p.l...@gmail.com> wrote:
>>
>>> On Mon, Jun 28, 2010 at 5:23 PM, Steve Richfield <
>>> steve.richfi...@gmail.com> wrote:
>>>
>>>> Rob,
>>>>
>>>> I just LOVE opaque postings, because they identify people who see things
>>>> differently than I do. I'm not sure what you are saying here, so I'll make
>>>> some "random" responses to exhibit my ignorance and elicit more 
>>>> explanation.
>>>>
>>>>
>>> I think based on what you wrote, you understood (mostly) what I was
>>> trying to get across.  So I'm glad it was at least quasi-intelligible. :)
>>>
>>>
>>>>  It sounds like this is a finer measure than the "dimensionality" that I
>>>> was referencing. However, I don't see how to reduce anything as quantized
>>>> as dimensionality into finer measures. Can you say some more about this?
>>>>
>>>>
>>> I was just referencing Gardenfors' research program of "conceptual
>>> spaces" (I was intentionally vague about committing to this fully though
>>> because I don't necessarily think this is the whole answer).  Page 2 of this
>>> article summarizes it pretty succinctly:
>>> http://www.geog.ucsb.edu/.../ICSC_2009_AdamsRaubal_Camera-FINAL.pdf
>>>
>>>
>>>
>>>> However, different people's brains, even the brains of identical twins,
>>>> have DIFFERENT mappings. This would seem to mandate experience-formed
>>>> topology.
>>>>
>>>>
>>>
>>> Yes definitely.
>>>
>>>
>>>>  Since these conceptual spaces that structure sensorimotor
>>>>> expectation/prediction (including in higher order embodied exploration of
>>>>> concepts I think) are multidimensional spaces, it seems likely that some
>>>>> kind of neural computation over these spaces must occur,
>>>>>
>>>>
>>>> I agree.
>>>>
>>>>
>>>>> though I wonder what it actually would be in terms of neurons, (and if
>>>>> that matters).
>>>>>
>>>>
>>>> I don't see any route to the answer except via neurons.
>>>>
>>>
>>> I agree this is true of natural intelligence, though maybe in modeling,
>>> the neural level can be shortcut to the topo map level without recourse to
>>> neural computation (using some more straightforward computation like
>>> matrix algebra instead).
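
For what it's worth, here is a toy illustration of that kind of shortcut: a
Kohonen-style self-organizing map built entirely out of matrix algebra, with
no neuron-level machinery at all. All sizes, rates, and the 1-D map shape are
arbitrary; it is a sketch of the idea, not anything proposed in this thread.

import numpy as np

rng = np.random.default_rng(0)
n_units, dim, steps = 20, 2, 2000
weights = rng.random((n_units, dim))   # the "map": one weight vector per unit
grid = np.arange(n_units)              # unit positions along a 1-D map

for t in range(steps):
    x = rng.random(dim)                                       # a random 2-D input
    winner = np.argmin(np.linalg.norm(weights - x, axis=1))   # best-matching unit
    lr = 0.5 * (1 - t / steps)                                # decaying learning rate
    sigma = 3.0 * (1 - t / steps) + 0.5                       # decaying neighborhood width
    neighborhood = np.exp(-((grid - winner) ** 2) / (2 * sigma ** 2))
    weights += lr * neighborhood[:, None] * (x - weights)     # pull neighbors toward x

# Adjacent units should now have similar weight vectors, i.e. a topographic map.
print(np.round(weights[:5], 2))

Whether anything like this is a fair stand-in for what cortex actually does
is, of course, exactly the open question.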
>>>
>>> Rob
>>>
>>


