"why is my vision center in the same place as yours? Why does language tend
to take root in roughly the same place in each person's brain? And why is it
that when someone does use an unusual area of the brain for a particular
type of task, they tend to show unusual abilities or deficits in that type
of task?"

Localism may be a trend preserved by natural selection, but that does not
mean it's fundamental to how intelligence works.

Spend some time exploring Dario Nardi's book "Neuroscience of Personality"
in depth: the 16 personality types derived from Carl Jung's work have now
been mapped via EEG technology at UCLA. The autistic brain is very
different; Temple Grandin, for example, has a very different brain. Brain
plasticity demonstrates the non-locality of the brain; read the book "The
Brain That Changes Itself" to broaden your awareness of discoveries that
have been made, if you were not already aware of them. There are documented
cases of people born with half a brain who can see out of both eyes and
function in all human ways; without brain scans no one would know they had
half a brain, it's not obvious. There are cases where animals (cats) were
surgically altered so that their eyes were essentially plugged into their
auditory cortex and their ears into their visual cortex, and the brain
figured it out; it worked. The most likely and somewhat obvious explanation
for why humans tend to have vision in the same place and hearing in the
same place probably has to do with the locations of our eyes and ears,
features preserved by natural selection.

Dario Nardi's google talk
http://www.youtube.com/watch?v=MGfhQTbcqmA

On Mon, Nov 19, 2012 at 8:46 AM, Aaron Hosford <[email protected]> wrote:

> Like Hawkins, I used to follow the standard understanding of brain matter
> as "mind stuff": just put enough neurons together with the right local
> properties, add in a reward signal, and voila, here's a mind. It seemed
> obvious then that the mind was an emergent phenomenon that resulted from a
> substrate capable of deep learning, and that there were other substrates
> which could also work besides brain matter.
>
> I still see no reason to revise my intuition that other substrates can
> work, but Steven Pinker's "How the Mind Works" has convinced me that this
> substrate must be *structured* for a mind to emerge. I find myself in
> agreement with his assertion that there are mental organs or modules
> specifically tailored to accomplishing certain types of computation, and
> the broad-level architecture serves to bring those organs into synergy to
> accomplish intelligence.
>
> Evidence for this has been observed extensively in brain research, where
> specific areas of the brain have been associated with certain types of
> processing. If the brain is really just "mind stuff", a raw, unstructured
> collection of neurons which must learn from scratch, then why is my vision
> center in the same place as yours? Why does language tend to take root in
> roughly the same place in each person's brain? And why is it that when
> someone does use an unusual area of the brain for a particular type of
> task, they tend to show unusual abilities or deficits in that type of task?
>
> Hawkins' approach will have limited success, not because he's on the wrong
> track, but because it's only part of the picture. If he does go on to
> create general intelligence, it will be by creating variants of his
> algorithm to learn/process different types of computational tasks more
> effectively and connecting them together in an architecture designed to
> synergize the capabilities of each.
>
>
>
> On Mon, Nov 19, 2012 at 2:51 AM, Ben Goertzel <[email protected]> wrote:
>
>>
>> Also, a more recent summary of why I think their view of the *brain* is
>> oversimplified is here:
>>
>> http://hplusmagazine.com/2012/07/20/how-the-brain-works/
>>
>> -- Ben G
>>
>>
>> On Mon, Nov 19, 2012 at 4:48 PM, Ben Goertzel <[email protected]> wrote:
>>
>>>
>>> I believe my response to Hawkins' book, written in 2004, still holds as
>>> a response to Kurzweil's book (with its similar theme) as well:
>>>
>>>
>>> http://www.goertzel.org/dynapsyc/2004/OnBiologicalAndDigitalIntelligence.htm
>>>
>>> -- Ben Goertzel
>>>
>>>
>>> On Mon, Nov 19, 2012 at 4:42 PM, Micah Blumberg <[email protected]> wrote:
>>>
>>>> YKY, your approach does not personally make sense to me. Naturally I
>>>> hope you are successful. Have you read either "On Intelligence" by Jeff
>>>> Hawkins or "How to Create a Mind" by Ray Kurzweil? The latter book was
>>>> recently published. I suspect that many people trying to build AGI will be
>>>> able to consider a new approach after reading it. Please let me know if
>>>> you are aware of that kind of AI and what advantages your planned system
>>>> may have over it.
>>>>
>>>>
>>>> On Sun, Nov 18, 2012 at 9:20 PM, Aaron Hosford <[email protected]> wrote:
>>>>
>>>>> My matrix representation is based on the observation that AB != BA in
>>>>>> the composition of concepts, for example "john loves mary != mary loves
>>>>>> john".  I'm still trying to determine if the matrix representation is
>>>>>> sound and consistent with our common-sense view of concepts, but it
>>>>>> looks promising...
>>>>>
>>>>>
>>>>> I may be jumping to conclusions, but it sounds like what you're doing
>>>>> is placing A/"john", B/"mary", etc. as labels for both rows and columns,
>>>>> where the row label represents the first value and the column represents
>>>>> the second (or maybe vice versa), so that you can represent "john loves
>>>>> mary" by placing a nonzero value in the entry indexed by row "john" and
>>>>> column "mary". The most obvious interpretation for this nonzero value
>>>>> would be the probability that "john loves mary" is true, or it could
>>>>> possibly be certainty or some other similar measure.
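If that reading is right, the scheme can be sketched in a few lines of Python (my own illustration with hypothetical names like `people` and `set_loves`; nothing here is code from either actual system):

```python
# Rows are subjects, columns are direct objects, both drawn from the same
# label set; each entry holds the probability (or certainty) that the
# "loves" relation holds for that (row, column) pair.
people = ["john", "mary", "sue"]
index = {name: i for i, name in enumerate(people)}

loves = [[0.0] * len(people) for _ in people]  # all relations unknown

def set_loves(subj, obj, p):
    """Record P("subj loves obj") = p at row subj, column obj."""
    loves[index[subj]][index[obj]] = p

def get_loves(subj, obj):
    """Look up the stored probability for "subj loves obj"."""
    return loves[index[subj]][index[obj]]

set_loves("john", "mary", 0.9)
```

Because rows and columns play different roles, get_loves("john", "mary") is 0.9 while get_loves("mary", "john") remains 0.0, which is exactly the order-sensitivity the thread turns on.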
>>>>>
>>>>> As you pointed out, this is a nice generalization of a basic adjacency
>>>>> matrix, with nonzero entries marking adjacencies (directed links or edges)
>>>>> in the conceptual graph. Matrices of this form could be mapped to directed
>>>>> graphs with links not only labeled "loves", but having a secondary label
>>>>> indicating the associated probability or certainty value of the link.
>>>>> In my own system, I keep my internal representation directly in terms of
>>>>> nodes & links rather than such a generalized adjacency matrix, but the
>>>>> specific
>>>>> data structure is really only an implementation detail, not intrinsic to
>>>>> the structure of the information being stored. We are basically doing the
>>>>> same thing, in other words, provided my understanding of your
>>>>> representational method is correct.
>>>>>
>>>>> I'm still trying to determine if the matrix representation is sound
>>>>>> and consistent with our common-sense view of concepts, but it looks
>>>>>> promising...
>>>>>
>>>>>
>>>>> I have spent a lot of time thinking about this problem, albeit in
>>>>> terms of semantic nets rather than the matrices that correspond to them. I
>>>>> think your approach is consistent with how we internally work with
>>>>> concepts, but it is probably incomplete. For example, what happens when
>>>>> you want to represent the probability or certainty that "john" is the
>>>>> one participating as the subject in this "loves" relationship
>>>>> independently from the probability/certainty that "mary" is the direct
>>>>> object? Or what if you have lower probability/certainty for the
>>>>> classification of the
>>>>> relationship as "loves" (as opposed to say, "hates") than you do for the
>>>>> individuals filling the subject and direct object roles? It seems now that
>>>>> you need 3 probability/certainty numbers for each position in the matrix,
>>>>> one for subject, one for direct object, and one for relationship type.
>>>>>
>>>>> Instead of multiple values in a single cell of the matrix -- or rather
>>>>> 3 parallel matrices with the same row and column labels -- it seems more
>>>>> natural to me to make the clause/predicate/act/relationship a separate
>>>>> value in itself, so the original matrix is decomposed into (1) a "subject"
>>>>> matrix, mapping the probability/certainty of "john" (among other people)
>>>>> participating in the clause as the subject, (2) a "direct object" matrix,
>>>>> mapping the probability/certainty of "mary" (among other people)
>>>>> participating in the clause as the direct object, and (3) a "predicate"
>>>>> matrix, mapping the probability/certainty that the clause's predicate is
>>>>> "loves" (among other possible predicates). Each of these different
>>>>> matrices, "subject", "direct object", and "predicate", maps to a link
>>>>> label in the corresponding directed graph, and each link receives its
>>>>> own probability/certainty value independent of the others. The
>>>>> representational
>>>>> advantage of making the clause itself a row/column label only grows when
>>>>> prepositional phrases, adverbs, indirect objects, and other clausal
>>>>> modifiers start to come into play, and the utility of your matrix trick is
>>>>> preserved by simply adding a new matrix for each constituent role in the
>>>>> clause.
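Aaron's decomposition can be made concrete with a small sketch (again a hypothetical illustration; the clause id "c1" and the dictionary-based "matrices" are my own stand-ins, not code from his system):

```python
# The clause itself becomes a label ("c1"), and each constituent role gets
# its own sparse matrix mapping (clause, filler) -> probability/certainty.
subject_m   = {("c1", "john"):  0.95}  # P(subject of c1 is "john")
object_m    = {("c1", "mary"):  0.95}  # P(direct object of c1 is "mary")
predicate_m = {("c1", "loves"): 0.60}  # P(predicate of c1 is "loves"),
                                       # lower because it might be "hates"

def role_prob(matrix, clause, filler):
    """Probability that `filler` plays this role in `clause` (0 if unstored)."""
    return matrix.get((clause, filler), 0.0)

# Extending to indirect objects, adverbs, etc. just means adding another
# matrix per role, so no single cell ever needs to hold multiple numbers.
```

Each dictionary here plays the part of one sparse matrix, and each entry maps to one independently weighted link in the corresponding directed graph.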
>>>>>
>>>>>
>>>>>
>>>>> On Sun, Nov 18, 2012 at 8:10 PM, YKY (Yan King Yin, 甄景贤) <
>>>>> [email protected]> wrote:
>>>>>
>>>>>> On Sat, Nov 10, 2012 at 3:44 AM, Aaron Hosford 
>>>>>> <[email protected]>wrote:
>>>>>>
>>>>>>> My "matrix trick" may be able to map propositions to a
>>>>>>>> high-dimensional vector space, but my assumption that concepts are
>>>>>>>> matrices (that are also rotations in space) may be unjustified.  I
>>>>>>>> need to find a set of criteria for matrices to represent concepts
>>>>>>>> faithfully.  This
>>>>>>>> direction is still hopeful.
>>>>>>>
>>>>>>>
>>>>>>> I'm curious what your "matrix trick" is. Are you familiar with
>>>>>>> adjacency matrices? They're the simplest and most common way of
>>>>>>> representing directed graphs as matrices.
>>>>>>> http://en.wikipedia.org/wiki/Adjacency_matrix Semantic nets
>>>>>>> typically have sparse adjacency matrices, and there are a lot of good
>>>>>>> algorithms and libraries out there for efficiently representing and
>>>>>>> manipulating sparse matrices.
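As a concrete reference point (my own minimal sketch, not code from any system in this thread), here is a directed graph and its adjacency matrix, stored both densely and sparsely:

```python
# A directed graph on three nodes and its adjacency matrix.
nodes = ["a", "b", "c"]
edges = [("a", "b"), ("b", "c"), ("a", "c")]  # directed links

idx = {n: i for i, n in enumerate(nodes)}

# Dense form: entry [i][j] = 1 iff there is a link from nodes[i] to nodes[j].
dense = [[0] * len(nodes) for _ in nodes]
for src, dst in edges:
    dense[idx[src]][idx[dst]] = 1

# Sparse form (dictionary of keys): only the nonzero entries are stored,
# which is what makes sparse libraries efficient for semantic nets.
sparse = {(idx[src], idx[dst]): 1 for src, dst in edges}
```

Note the asymmetry of the dense form: dense[idx["a"]][idx["b"]] is 1 while dense[idx["b"]][idx["a"]] is 0, since the links are directed.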
>>>>>>>
>>>>>>
>>>>>>
>>>>>> Sorry, almost missed your post.  My method is to *represent* logical
>>>>>> atoms as square matrices with real entries.  This allows me to convert
>>>>>> the matrices to vectors (by simply flattening the matrices) and
>>>>>> calculate the distances (i.e. similarities) between them via the
>>>>>> Euclidean norm.
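The flatten-and-compare step reads like the following sketch (the matrices here are arbitrary stand-ins of my own; only the flattening and the Euclidean norm come from YKY's description):

```python
import math

# Two small square matrices standing in for concept representations.
A = [[1.0, 0.0],
     [0.5, 1.0]]
B = [[1.0, 0.2],
     [0.5, 0.8]]

def flatten(m):
    """Row-major flattening of a square matrix into a vector."""
    return [x for row in m for x in row]

def distance(m1, m2):
    """Euclidean distance between the two matrices after flattening."""
    return math.sqrt(sum((a - b) ** 2
                         for a, b in zip(flatten(m1), flatten(m2))))
```

Here distance(A, B) is sqrt(0.2**2 + 0.2**2), roughly 0.283, and identical matrices are at distance 0, so smaller distances can stand in for greater similarity.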
>>>>>>
>>>>>> My matrices are different from adjacency matrices which has integer
>>>>>> entries.  The continuous values potentially allow learning via 
>>>>>> propagation
>>>>>> of errors similar to back-propagation for neural nets.
>>>>>>
>>>>>> My matrix representation is based on the observation that AB != BA in
>>>>>> the composition of concepts, for example "john loves mary != mary loves
>>>>>> john".  I'm still trying to determine if the matrix representation is
>>>>>> sound and consistent with our common-sense view of concepts, but it
>>>>>> looks promising...
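The non-commutativity YKY is leaning on is easy to demonstrate with an arbitrary pair of matrices (illustrative values of my own, not actual concept matrices from his system):

```python
def matmul(X, Y):
    """Multiply two square matrices given as nested lists."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 1],
     [0, 1]]
B = [[1, 0],
     [1, 1]]

AB = matmul(A, B)  # composition in one order
BA = matmul(B, A)  # composition in the other order
```

AB comes out as [[2, 1], [1, 1]] while BA is [[1, 1], [1, 2]], so composing concepts as matrix products genuinely keeps "john loves mary" distinct from "mary loves john".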
>>>>>>
>>>>>> YKY
>>>>>>    *AGI* | Archives <https://www.listbox.com/member/archive/303/=now>
>>>>>> <https://www.listbox.com/member/archive/rss/303/23050605-bcb45fb4> |
>>>>>> Modify <https://www.listbox.com/member/?&;> Your Subscription
>>>>>> <http://www.listbox.com>
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>>
>>>> ~~
>>>> Warmly,
>>>>
>>>>
>>>> Micah
>>>> 7 1 4 ) 6 9 9 - 4 2 1 3 (voicemail and texting same digits)
>>>>
>>>
>>>
>>>
>>> --
>>> Ben Goertzel, PhD
>>> http://goertzel.org
>>>
>>> "My humanity is a constant self-overcoming" -- Friedrich Nietzsche
>>>
>>>
>>
>>
>> --
>> Ben Goertzel, PhD
>> http://goertzel.org
>>
>> "My humanity is a constant self-overcoming" -- Friedrich Nietzsche
>>
>>
>
>



-- 

~~
Warmly,


Micah
7 1 4 ) 6 9 9 - 4 2 1 3 (voicemail and texting same digits)



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com
