On Sat, Apr 19, 2014 at 1:47 AM, Jim Bromer <[email protected]> wrote:

> I spent some time reading the links you provided about self-organized
> criticality and its use as a fundamental theory of mind. It was very
> interesting, but the idea is too simple for me to accept as a reasonable
> principle of mind. The active principles of mind may one day be defined
> using a constrained list of methods from which complexity emerges, but
> that does not mean the concepts can be accurately defined with a few
> principles. My view is that a computer program can use the principles of
> referential relations as the fundamental implementation of artificial
> intelligence, and these might be described using a constrained list. The
> conceptual references, however, have to allow for great complexity, and
> that complexity has to arise from the interactions between referential
> concepts, not from the simplicity (or relative simplicity) of the
> underlying principles of representation.
> Jim Bromer
>

Ok, I tend to agree. Notice, however, that self-organized criticality does
seem able to explain unbounded complexification. But human minds are native
to the system where cultural complexification happened, while AGIs might
face a tougher challenge because they are, in a sense, aliens that we are
trying to invite into our cultural environment. So even if some simple
algorithm explains a lot about the human mind, it is not necessarily
sufficient for building an AGI, much less the type of AGI that we might
desire.
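
To make the concept concrete, here is a minimal sketch of the
Bak-Tang-Wiesenfeld sandpile, the canonical toy model of self-organized
criticality. The grid size, threshold, and number of steps are
illustrative assumptions on my part, not parameters from Bak's papers:

    # Minimal sandpile sketch (Bak-Tang-Wiesenfeld model). Grains are
    # dropped one at a time; a site holding 4 or more grains topples,
    # passing one grain to each neighbour and possibly triggering an
    # avalanche of further topplings.
    import random

    SIZE = 20       # side length of the square grid (illustrative)
    THRESHOLD = 4   # a site topples when it holds this many grains

    grid = [[0] * SIZE for _ in range(SIZE)]

    def topple(grid):
        """Relax the grid until stable; return the avalanche size."""
        avalanche = 0
        unstable = True
        while unstable:
            unstable = False
            for i in range(SIZE):
                for j in range(SIZE):
                    if grid[i][j] >= THRESHOLD:
                        unstable = True
                        avalanche += 1
                        grid[i][j] -= 4
                        # one grain to each neighbour; grains that
                        # fall off the edge of the grid are lost
                        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                            ni, nj = i + di, j + dj
                            if 0 <= ni < SIZE and 0 <= nj < SIZE:
                                grid[ni][nj] += 1
        return avalanche

    sizes = []
    for _ in range(10000):
        i, j = random.randrange(SIZE), random.randrange(SIZE)
        grid[i][j] += 1
        sizes.append(topple(grid))

The point is that nothing in the update rule is tuned: the system drives
itself to the critical state, where avalanche sizes settle into a
power-law-like distribution. Whether anything so simple scales up to a
mind is, as you say, another question entirely.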

Best,
Telmo.


>
> On Thu, Apr 17, 2014 at 11:05 AM, Telmo Menezes <[email protected]> wrote:
>
>> Hi Jim,
>>
>> This reminds me of self-organized criticality and Per Bak's simple idea
>> of how it could be applied to learning in neural networks.
>>
>> https://en.wikipedia.org/wiki/Self-organized_criticality
>>
>>
>> https://www.simonsfoundation.org/quanta/20140403-a-fundamental-theory-to-model-the-mind/
>>
>> I find the simplicity of the idea very attractive. I doubt that it is
>> enough, but I wouldn't be surprised if it ends up playing a central role in
>> AGI.
>>
>> Best,
>> Telmo.
>>
>>
>> On Thu, Apr 17, 2014 at 3:31 PM, Jim Bromer <[email protected]> wrote:
>>
>>> There is a lot of evidence that humans, like other animals, learn
>>> incrementally. My belief, however, is that because we use ideas in
>>> different ways, a new idea can interact with other ideas. There are
>>> moments when something learned incrementally can be leveraged to
>>> produce leaps of insight. I call this kind of knowledge structural,
>>> because an idea can suddenly provide greater structure to the knowledge
>>> related to a particular subject. The new increment of knowledge that
>>> triggers the structural insight may or may not be the key that provides
>>> the leverage of the structure. It may be that some new piece of
>>> knowledge simply helps to crystallize a structure in a way that helps
>>> the learner better utilize other knowledge.
>>>
>>> In programming and computational mathematics we find distinctions
>>> between things like operators and operands, and you have to be able to
>>> draw distinctions between the different parts of a computation if you
>>> want to use mathematics creatively. However, I think it is obvious that
>>> the situation in thought is more dynamic and more fluid. A piece of
>>> information may play a role determined by other information, so that it
>>> can react with yet other information; we simply cannot categorize
>>> beforehand how a given piece of information might be used. An AGI
>>> program has to be able to discover how pieces of information can work
>>> together to create greater structures of knowledge. But for that to
>>> happen, the program has to be designed to provide the structure that
>>> ensures the potential to build learned structures is there.
>>> Jim Bromer
>>>
>>
>


