OK, that makes sense Ben. So long as you have a clear picture of how to
progress the theory beyond temporary expediency, using the
state-of-the-art for now may be strategic.

So long as you are also moving forward with some strong theoretical
candidates, that is. If we get trapped without theory, we're blind. There
are too few people with any broad theoretical vision for how to move
forward, and too many script kiddies just tweaking blindly, viz. the
"important step" this thread began with.

I'm encouraged that it now appears you are deconstructing grammar and
resolving it to a raw network level. That Linas is seeing the relevance of
maths like category theory, which is motivated by formal incompleteness,
speaks to this realization. (Though he may not be aware of the full import.)

Deep learning does not realize this. It does not realize that formal
description above the network level will be incomplete. I'm sure that is
the key theoretical failure holding it back, and I wish more people were
talking about it. If the deep learning field realized this, it wouldn't
still be trying to "learn" representations, whether in intermediate layers
or otherwise. (What was that recent article about the representation
"bottleneck" idea in deep learning needing to be revised?)

It's actually ironic that deep learning does not realize this idea that
formal description (above the network) must always be incomplete, because
it is also the key to the success of deep learning! The whole success of
distributed representation is due to this. The field moved to distributed
representation blindly, without theory, just because things started working
better that way! Yet you still see articles where people say no one knows
why distributed representation works better. The failure of theoretical
vision is extraordinary.

But suppose you've deconstructed your dictionaries (throwing out your
hand-coded dictionaries?) and arrived back at the level of observation in a
sequence network, and done it because of the theoretical realization that
complete representation above the network level is impossible (or was it
just an accident: trying to deconstruct symbolism to connectionism, and
then noticing the relevance to variational theories of maths?). Then your
group would be the only one I've come across who has done so. (I think the
Oxford thread of variational formalization, around Coecke et al. and
Grefenstette, was also seduced away by the short-term effectiveness of
deep learning on GPUs.)

We need to keep (or get!) the theoretical vision.

Even given a vision of formal incompleteness, you (and Pissanetzky?) may
still lack a totally clear conception that the key problem is assembling
elements in new ways all the time.

Still, some focus on assembling elements in different ways (from a sequence
network) is encouraging. There is scope to move forward.

As a concrete, immediate idea to explore moving forward, I hope you'll
look at using oscillations to structure your sequence network
representations. For it to be meaningful, your networks will need to be
connected in ways which directly reflect the ideas behind embedding vectors
(without their linearities). I don't know if that is true for your
networks. But given that, implementation should be simple, if practically
slow without parallel hardware.
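To make the oscillation idea a bit more concrete: here is the kind of toy
experiment one could run. A Kuramoto-style phase model is my own choice of
formalism here, not anything established in this thread, and the network
sizes and weights are made up for illustration. The point is only that
strongly connected groups of nodes phase-lock, so synchrony can pick out
groupings implicit in the connectivity:

```python
import numpy as np

def simulate(weights, steps=2000, dt=0.05, freq=1.0, seed=0):
    """Kuramoto-style phase dynamics: each node carries a phase, and is
    pulled toward its neighbours' phases in proportion to connection
    strength. Identical natural frequency for all nodes, for simplicity."""
    rng = np.random.default_rng(seed)
    n = weights.shape[0]
    phase = rng.uniform(0.0, 2.0 * np.pi, n)
    for _ in range(steps):
        # diff[i, j] = phase_j - phase_i; sin(diff) attracts i toward j.
        diff = phase[None, :] - phase[:, None]
        phase = phase + dt * (freq + (weights * np.sin(diff)).sum(axis=1))
    return phase

# A made-up network: two densely connected clusters of three nodes each,
# with no links between the clusters.
w = np.zeros((6, 6))
w[:3, :3] = 0.8
w[3:, 3:] = 0.8
np.fill_diagonal(w, 0.0)

phases = simulate(w)

# Degree of phase-locking within each cluster (1.0 = perfect synchrony).
sync_a = abs(np.exp(1j * phases[:3]).mean())
sync_b = abs(np.exp(1j * phases[3:]).mean())
print(sync_a, sync_b)
```

Each cluster locks internally while the two clusters drift independently,
so the momentary phase pattern marks out transient "assemblies". Which
nodes lock depends entirely on the connectivity, which is why that
connectivity would need to reflect the embedding-vector relationships for
the groupings to mean anything.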

-Rob

On Thu, Feb 21, 2019 at 12:03 AM Ben Goertzel <[email protected]> wrote:

> It's not that it's hard to feed data into OpenCog, whose
> representation capability is very flexible
>
> It's simply that deep NNs running on multi-GPU clusters can process
> massive amounts of text very very fast, and OpenCog's processing is
> much slower than that currently...

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T581199cf280badd7-M8a9e4f757c63064e69ab356b
Delivery options: https://agi.topicbox.com/groups/agi/subscription
