There is very little. Someone should do some research. Here is a paper on
language fitness:

http://kybele.psych.cornell.edu/~edelman/elcfinal.pdf

LSA is *not* discussed there, nor is any concept of fitness for the language
itself. Words that sound alike (or are written alike) must be capable of
disambiguation using LSA; otherwise the language would be unfit. Let us have a
*gedanken* language in which "spring", the example I have taken from my
Spanish, cannot be disambiguated. Suppose "spring" meant "step forward" as
well as its other meanings. If I am learning to dance I do not think about
"primavera", "resorte" or "manantial", but I do think about "salsa". If I did
not know whether I was to jump or to put my leg forward, it would be extremely
confusing. To my knowledge, fitness in this context has not been discussed.
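
To make the disambiguation point concrete, here is a minimal sketch of my own
(it is not from the Edelman paper): each sense of "spring" gets a context
vector of co-occurring words, and a new occurrence is assigned to whichever
sense vector it is closest to by cosine similarity. The vocabulary, counts and
sense labels are all invented for the illustration.

import numpy as np

# Column order of the count vectors below (toy vocabulary, invented).
vocab = ["season", "flower", "coil", "metal", "water", "jump", "dance"]

# Hypothetical co-occurrence counts for each sense of "spring".
senses = {
    "primavera (season)":       np.array([9, 7, 0, 0, 1, 0, 1], float),
    "resorte (coil)":           np.array([0, 0, 8, 6, 0, 1, 0], float),
    "manantial (water source)": np.array([1, 0, 0, 0, 9, 0, 0], float),
    "step forward (gedanken)":  np.array([0, 0, 1, 0, 0, 8, 5], float),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# An occurrence of "spring" in a dancing lesson: the surrounding context
# mentions "jump" and "dance".
context = np.array([0, 0, 0, 0, 0, 1, 1], float)

best = max(senses, key=lambda s: cosine(senses[s], context))
print(best)   # -> "step forward (gedanken)"

If the contexts did not separate the senses, the cosine scores would tie and
the word would stay ambiguous, which is the "unfit" case described above.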

In fact, perhaps the only relevant work is my own, which I posted here some
time ago. The reduction in entropy (compression) obtained with LSA was
disappointing. The different meanings (which are different words in Spanish
and other languages) compress more readily. Both Spanish and English have a
degree of fitness which (just possibly) is definable in LSA terms.
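
As a back-of-the-envelope illustration of the entropy point (the numbers are
toy figures of mine, not those from the earlier post): Spanish spends
log2(4) = 2 bits per occurrence distinguishing the four forms, and collapsing
them onto one concept symbol, which is roughly what LSA does, removes exactly
those 2 bits. A quick unigram-entropy calculation shows the size of the saving.

import math
from collections import Counter

def unigram_entropy(tokens):
    # Shannon entropy in bits per token of a token stream.
    counts = Counter(tokens)
    n = len(tokens)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Invented toy streams: English uses one written form, Spanish four words,
# each sense occurring equally often.
english = ["spring"] * 100
spanish = (["primavera"] * 25 + ["resorte"] * 25 +
           ["manantial"] * 25 + ["salsa"] * 25)

print(unigram_entropy(english))   # 0.0 bits/token
print(unigram_entropy(spanish))   # 2.0 bits/token = log2(4)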


  - Ian Parker

On 7 July 2010 17:12, Gabriel Recchia <grecc...@gmail.com> wrote:

> > In short, instead of a "pot of neurons", we might instead have a pot of
> dozens of types of
> > neurons that each have their own complex rules regarding what other types
> of neurons they
> > can connect to, and how they process information...
>
> > ...there is plenty of evidence (from the slowness of evolution, the large
> number (~200)
> > of neuron types, etc.), that it is many-layered and quite complex...
>
> The disconnect between the low-level neural hardware and the implementation
> of algorithms that build conceptual spaces via dimensionality
> reduction--which generally ignore facts such as the existence of different
> types of neurons, the apparently hierarchical organization of neocortex,
> etc.--seems significant. Have there been attempts to develop computational
> models capable of LSA-style feats (e.g., constructing a vector space in
> which words with similar meanings tend to be relatively close to each other)
> that take into account basic facts about how neurons actually operate
> (ideally in a more sophisticated way than the nodes of early connectionist
> networks which, as we now know, are not particularly neuron-like at all)? If
> so, I would love to know about them.
>
>
> On Tue, Jun 29, 2010 at 3:02 PM, Ian Parker <ianpark...@gmail.com> wrote:
>
>> The paper seems very similar in principle to LSA. What you need for a
>> concept vector (or position) is the application of LSA followed by K-means,
>> which will give you your concept clusters.
>>
>> I would not knock Hutter too much. After all, LSA reduces {primavera,
>> manantial, salsa, resorte} to one word, giving a 2-bit saving on Hutter.
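
A minimal sketch of that LSA-followed-by-K-means pipeline, assuming
scikit-learn and a made-up four-document corpus (the parameter values are
purely illustrative):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

docs = [
    "spring flowers bloom in the warm season",
    "the season of spring brings rain and flowers",
    "the spring in the clock is a coiled metal part",
    "a coiled metal spring stores mechanical energy",
]

tfidf = TfidfVectorizer().fit_transform(docs)        # term-document matrix
lsa = TruncatedSVD(n_components=2, random_state=0)   # the LSA step
vectors = lsa.fit_transform(tfidf)                   # concept vectors (positions)

# K-means over the reduced vectors gives the concept clusters.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
print(labels)   # the "season" documents and the "coil" documents should separate
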
>>
>>
>>   - Ian Parker
>>
>>
>> On 29 June 2010 07:32, rob levy <r.p.l...@gmail.com> wrote:
>>
>>> Sorry, the link I included was invalid, this is what I meant:
>>>
>>>
>>> http://www.geog.ucsb.edu/~raubal/Publications/RefConferences/ICSC_2009_AdamsRaubal_Camera-FINAL.pdf
>>>
>>>
>>> On Tue, Jun 29, 2010 at 2:28 AM, rob levy <r.p.l...@gmail.com> wrote:
>>>
>>>> On Mon, Jun 28, 2010 at 5:23 PM, Steve Richfield <
>>>> steve.richfi...@gmail.com> wrote:
>>>>
>>>>> Rob,
>>>>>
>>>>> I just LOVE opaque postings, because they identify people who see
>>>>> things differently than I do. I'm not sure what you are saying here, so 
>>>>> I'll
>>>>> make some "random" responses to exhibit my ignorance and elicit more
>>>>> explanation.
>>>>>
>>>>>
>>>> I think based on what you wrote, you understood (mostly) what I was
>>>> trying to get across.  So I'm glad it was at least quasi-intelligible. :)
>>>>
>>>>
>>>>>  It sounds like this is a finer measure than the "dimensionality" that
>>>>> I was referencing. However, I don't see how to reduce anything as 
>>>>> quantized
>>>>> as dimensionality into finer measures. Can you say some more about this?
>>>>>
>>>>>
>>>> I was just referencing Gardenfors' research program of "conceptual
>>>> spaces" (I was intentionally vague about committing to this fully though
>>>> because I don't necessarily think this is the whole answer).  Page 2 of 
>>>> this
>>>> article summarizes it pretty succinctly:
>>>> http://www.geog.ucsb.edu/.../ICSC_2009_AdamsRaubal_Camera-FINAL.pdf
>>>>
>>>>
>>>>
>>>>> However, different people's brains, even the brains of identical twins,
>>>>> have DIFFERENT mappings. This would seem to mandate experience-formed
>>>>> topology.
>>>>>
>>>>>
>>>>
>>>> Yes definitely.
>>>>
>>>>
>>>>>   Since these conceptual spaces that structure sensorimotor
>>>>>> expectation/prediction (including in higher order embodied exploration of
>>>>>> concepts I think) are multidimensional spaces, it seems likely that some
>>>>>> kind of neural computation over these spaces must occur,
>>>>>>
>>>>>
>>>>> I agree.
>>>>>
>>>>>
>>>>>> though I wonder what it actually would be in terms of neurons, (and if
>>>>>> that matters).
>>>>>>
>>>>>
>>>>> I don't see any route to the answer except via neurons.
>>>>>
>>>>
>>>> I agree this is true of natural intelligence, though maybe in modeling,
>>>> the neural level can be shortcut to the topo map level without recourse to
>>>> neural computation (use some more straightforward computation like matrix
>>>> algebra instead).
>>>>
>>>> Rob
>>>>
>>>


