Korrelan,

Good. I'm interested in talking with you about this. There's a lot I agree
with, but let me pick out some specific points.

On Sun, Jun 30, 2019 at 5:00 PM korrelan <[email protected]> wrote:

> ...
>
> The external sensory cortex re-encodes incoming sensory streams by
> applying spatiotemporal compression
>

OK. Compression. That corresponds to my "repeated structure" suspicion.

I have something to say about compression. Long story short, I think we
need to think about cognitive patterns as an expansion, not a compression.

It invalidates nothing I've seen in your current work. You talk about
patterns. I talk about patterns. I may just broaden what you think of as a
pattern, moving away from the idea that a pattern is always a compression.

But there is nothing like an example. Rather than talk in the abstract,
maybe I can link a very nice talk which expresses similar ideas in a
different domain.

Domas is talking about reverse engineering, not cognition. But finding
meaningful structure in computer code is a similar problem to cognition
when you think about it. I very much like what he does with binary for his
reverse engineering task. He doesn't compress. He expands:

Christopher Domas, "The Future of RE: Dynamic Binary Visualization"
https://www.youtube.com/watch?v=4bM3Gut1hIk
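If it helps, the core trick in the talk, as I read it, is something like the
following toy sketch (my reconstruction, not his code): each consecutive byte
pair becomes a point in a 256x256 grid, "expanding" a 1-D byte stream into a
2-D density map whose visual texture differs by data type (text, machine
code, compressed data, ...).

```python
def digraph_counts(data: bytes):
    """Count occurrences of each (byte, next_byte) pair in a byte stream."""
    grid = [[0] * 256 for _ in range(256)]
    for a, b in zip(data, data[1:]):
        grid[a][b] += 1
    return grid

ascii_text = b"the quick brown fox jumps over the lazy dog " * 50
grid = digraph_counts(ascii_text)

# Plain ASCII text only ever populates the low-byte corner of the grid,
# which is why it looks so different from, say, compressed data.
populated = [(x, y) for x in range(256) for y in range(256) if grid[x][y]]
assert all(x < 128 and y < 128 for x, y in populated)
```

Nothing is thrown away here. The byte stream is mapped into a larger space,
and the structure becomes visible precisely because of that.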

The correspondence may not be obvious. If so there will be little point
arguing about it in the abstract. Hopefully I can interest you in a
concrete example. But I'm just throwing out his talk on the off chance it
strikes a chord with you: this idea that cognition may be performing an
expansion of sensory input, by contrast with the idea that the search for
meaning must be a compression of sensory input.
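In toy code terms, here is roughly the contrast I have in mind. This is
purely illustrative; the function names and the contexts-of-a-token reading
of "expansion" are my own gloss, not a claim about anyone's system.

```python
def compress(seq):
    """Compression view: factor a sequence down to one repeated unit."""
    for size in range(1, len(seq) + 1):
        unit = seq[:size]
        if len(seq) % size == 0 and unit * (len(seq) // size) == seq:
            return unit, len(seq) // size
    return seq, 1

def expand(seq):
    """Expansion view: index every token by the set of (left, right)
    contexts it occurs in -- a structure that can be larger than the input."""
    contexts = {}
    padded = [None] + list(seq) + [None]
    for i, tok in enumerate(seq):
        contexts.setdefault(tok, set()).add((padded[i], padded[i + 2]))
    return contexts

print(compress("abcabcabc"))  # ('abc', 3) -- smaller than the input
print(expand("abcabcabc"))    # more structure than the raw string
```

The compression throws information away to get smaller. The expansion keeps
everything and grows, which is exactly what you need when the "pattern" is
the set of contexts an element can substitute into, not a shared prototype.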

For a concrete example, phonemes might be a place to start.

> Phoneme model
>
> ...At this stage the phonemes will require a kind of wrapper; a program to
> interpret the connectomes output and then trigger the phonemes.
>

Right. So the problem is to distinguish your "parallel spatiotemporal spike
trains" so that the right phoneme is output at the right time.

In your words: how to implement the "wrapper".

I'm not trying to challenge you on this. It just so happens I've been
thinking about ways to implement such a "wrapper" as exactly a "parallel
spatiotemporal spike train". It's a very close fit for what you say.

We can think about the wrapper as a compression of the spatiotemporal spike
train. I'm going to suggest we think about it as an expansion. (For phonemes
the expansion is trivial; it just allows our phonemes to shift a bit. The
expansion idea really comes into its own when you move from phonemes to
words, and particularly from words to sentences. It has proven very
difficult, so far impossible, to build a "wrapper" for sentences in the
form of a compression.)
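A minimal sketch of what I mean by the trivial phoneme case (my own
construction, with illustrative names and parameters, not a description of
your system): each phoneme is a template of (neuron_id, time) spike events,
and matching tolerates a global time shift. That shift tolerance is the
trivial "expansion".

```python
def matches(template, spikes, max_shift=3):
    """True if `spikes` equals `template` shifted by some global dt."""
    for dt in range(-max_shift, max_shift + 1):
        shifted = {(n, t + dt) for n, t in template}
        if shifted == set(spikes):
            return True
    return False

def wrapper(spikes, phoneme_templates):
    """Trigger the first phoneme whose spike template matches the input."""
    for phoneme, template in phoneme_templates.items():
        if matches(template, spikes):
            return phoneme
    return None

# Hypothetical templates: two neurons firing at fixed relative times.
templates = {"/b/": {(0, 0), (1, 2)}, "/p/": {(0, 0), (2, 5)}}
print(wrapper([(0, 1), (1, 3)], templates))  # prints "/b/" (shifted by 1)
```

For words and sentences the space of allowed variations, not just time
shifts but substitutions, is where the expansion stops being trivial.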

Actually, it is better than just a wrapper to interface phonemes (and
finally sentences). It is a wrapper which generates phonemes from first
principles. So it does not require any assumption of phonemes; it predicts
them. No more "botch".

It just so happens I've been looking for a platform to try my theory
(schema?) for how to generate phonemes, using exactly the kind of parallel
spatiotemporal spike train you are talking about. Your platform seems
ideal. So if you don't want to try my schema, maybe you might be willing to
let me use your platform to try it myself.

-Rob


------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tf97c751029c2e4db-M7af197cc81c296036ddf1528
Delivery options: https://agi.topicbox.com/groups/agi/subscription