In Confabulation Theory, the analog of "models" being "mixed" is this:
cortical patches correspond to what logicians call "properties" of the
object of attention, like "color", and the various states of those patches
correspond to facts regarding those properties, like "red".  In other
words, what consciousness researchers refer to as specific "qualia"
correspond to specific states of cortical patches, mixed by dynamic
co-occurrence: Hebbian correlation structures both evolved and learned.  In
RHN's terminology, qualia comprise the "lexicon" of cognition and, like the
hard problem of lexical induction, that lexicon is both evolved (genetic)
and learned (memetic).  The Hebbian correlation structure of qualia entails
a grammar, but because the structure is holistic, that grammar is
statistical.  The dynamics of this structure refine the statistical
grammar, and what emerges is a semantic attractor space under the control
of lower brain structures governing salience, including resource
constraints that coordinate musculature with attraction dynamics via the
(CT-hypothesized) common phylogenetic origin of muscles and
thalamocortical modules.
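To make "statistical grammar from Hebbian co-occurrence" concrete, here is a
toy sketch (my illustration only, not CT's actual formalism): strengthening
links between co-occurring symbols and normalizing the counts yields
conditional transition probabilities, i.e. a rudimentary statistical grammar.

```python
from collections import defaultdict

# Toy Hebbian co-occurrence learner: symbols that fire together are wired
# together, and normalizing the counts gives a statistical "grammar".
# Illustrative sketch only -- not Hecht-Nielsen's actual formalism.
counts = defaultdict(lambda: defaultdict(int))

def observe(sequence):
    """Strengthen links between adjacent symbols (a Hebbian-style update)."""
    for a, b in zip(sequence, sequence[1:]):
        counts[a][b] += 1

def transition_prob(a, b):
    """Estimate P(b | a) from the accumulated co-occurrence counts."""
    total = sum(counts[a].values())
    return counts[a][b] / total if total else 0.0

observe(["the", "apple", "is", "red"])
observe(["the", "apple", "is", "green"])
print(transition_prob("apple", "is"))   # 1.0 -- "apple" was always followed by "is"
print(transition_prob("is", "red"))     # 0.5 -- "is" led to "red" half the time
```

Real CT confabulation operates over a far richer lexicon and conclusion
dynamics, but the same counting-and-normalizing principle is at its core.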

Anyway, I take it seriously enough that when I found I couldn't reproduce
the astounding confabulated next-sentence results reported by RHN, I
investigated and discovered I needed 64GB of RAM. So I finally got the RAM
and a couple of NVIDIA 1070s to see whether I could speed things up and
convince myself that CT is either off-base or on the right track.  Of
course, providing the "evolved" models is going to be the hard part, but I
hope to be able to draw on the results of Matt's benchmark winners for
that.
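As a sanity check on the power figures Matt quotes below, here is a quick
back-of-envelope calculation (assuming roughly 9 billion human-equivalent
brains and roughly 18 TW of global energy production; both figures are my
assumptions, not from the thread):

```python
# Back-of-envelope check of the energy estimates quoted below.
# Assumptions (mine): ~9 billion human-equivalent brains to automate all
# labor, ~18 TW of global energy production.
BRAIN_PFLOPS = 20            # mid-range of the 12-24 petaflop brain estimate
MW_PER_PFLOP = 1.0           # "about 1 megawatt per petaflop" at current tech
BRAINS = 9e9
GLOBAL_TW = 18.0

current_mw_per_brain = BRAIN_PFLOPS * MW_PER_PFLOP         # ~20 MW per brain today
specialized_kw_per_brain = 1.0                             # hoped-for economy of specialization
total_tw = BRAINS * specialized_kw_per_brain * 1e3 / 1e12  # kW -> TW

print(f"{current_mw_per_brain:.0f} MW per brain at current technology")
print(f"{total_tw:.0f} TW for all labor, "
      f"{100 * total_tw / GLOBAL_TW:.0f}% of global energy production")
```

Under those assumptions the "1 kW per brain is 50% of global energy
production" figure checks out; the brain's 20 watts is the existence proof
that it can be done far more cheaply.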

On Sun, Oct 13, 2019 at 12:35 PM James Bowery <[email protected]> wrote:

> There is an enormous amount of redundancy in the abstract thalamocortical
> architecture evincing small Kolmogorov Complexity in description.  While I
> understand "the devil is in the details" of this evolved structure (not the
> least of which is the fact that it elides that the cerebellum's neuron
> count is a super-majority of the brain's), there seems to be a vast
> theoretic vacuum of the requisite simplicity.  It's the dog that didn't
> bark.
>
> That's why I take Hecht-Nielsen's Confabulation Theory seriously:  Not
> because I believe, as he did, that he "solved the problem of cognition",
> but rather that he has a first order approximation of the neocortex (indeed
> thalamocortical) structure -- at least one barking dog -- an _approach_.
> It's rather like a framework for compression like mixture of models, rather
> than the models themselves.
>
> On Sun, Oct 13, 2019 at 12:07 PM Matt Mahoney <[email protected]>
> wrote:
>
>>
>>
>> On Sun, Oct 13, 2019, 10:09 AM <[email protected]> wrote:
>>
>>> Isn't that massively inefficient? It'd take 100 times more
>>> storage/computation to do the same thing as a weighted net no?
>>>
>>
>> The neural models I use in the top ranked text compressors use a lot less
>> than 12-24 petaflops and a petabyte of RAM. But the language modeling is
>> rather rudimentary, nowhere near AGI, but I would be happy for you to prove
>> my estimate wrong.
>>
>> And one more thing. That's one human brain. To automate all labor, you
>> need several billion times that. Current technology uses about 1 megawatt
>> per petaflop. Maybe neuromorphic computing could get it down to 100 kW per
>> brain. Maybe economy of specialization could reduce it to 1 kW, which is
>> 50% of global energy production. But shrinking transistors alone won't do
>> it. If we can't do the optimization, it's going to take nanotechnology,
>> moving atoms instead of electrons. The brain uses 20 watts. It can be done.
>>

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Td4a5dff7d017676c-Md4ceabdc57c93407f346c587
Delivery options: https://agi.topicbox.com/groups/agi/subscription
