On Mon, Oct 28, 2019 at 1:48 AM <[email protected]> wrote:

> No, I meant Word2Vec / GloVe. They use, e.g., a 500-dimensional space to
> relate words to each other. If we look at just 3 dimensions with 10 dots
> (words), we can visualize how a word is in 3 superpositions, entangled
> with other dots.
>

Pity. I thought you might be working on it yourself.

The ability to see, if not entanglement, then quantum superposition, as a
property of multi-dimensional word meaning representations, is something
that struck me around 1996. I didn't mention it much because it felt a bit
crackpot, but it was a pleasure to later find others working with the same
insight. I guess you know about the Quantum Interaction conference series
and associated work, e.g.:

http://www.newscientist.com/article/mg21128285.900-quantum-minds-why-we-think-like-quarks.html

My current position is that it suggests there may be a more fundamental
description of physics in terms of distributed representations. QM looks
like the observation of some kind of distributed properties at a lower
level. (I later found support for this too. You might look at Robert
Laughlin's work for hints of this in contemporary physics.)

Given that, yes, maybe quantum computers can help with the computation of
language seen as a distributed representation problem. Maybe they can
implicitly use this lower level distributed representation to crunch
language/cognition problems, faster. But if they do I think it will be
because language/cognition is fundamentally a distributed representation,
network, problem, and we need to formulate cognition properly that way
first.

As a first step in that direction, yes, word2vec and GloVe are current
iterations of an idea that dates back 30 years or more (the earlier Latent
Semantic Analysis, for instance).
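To make the distributed representation idea concrete, here is a toy sketch (my own illustration, with made-up 3-d vectors, not real word2vec or GloVe output): each word is a point in a shared space, and relatedness of meaning falls out as closeness in that space.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical 3-d embeddings; real models use hundreds of dimensions.
vecs = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

print(cosine(vecs["king"], vecs["queen"]))  # high: related words sit close
print(cosine(vecs["king"], vecs["apple"]))  # low: unrelated words sit apart
```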

The world has embraced this distributed representation idea much more
closely in the last 10 years, with the network AI (deep learning)
revolution. But as everyone knows, that is mostly 30-year-old ideas, made
practical by GPUs.

More closely embracing this old network tradition has been good, but there
is still a trick missing.

Since that same year, 1996, I have thought the missing trick is to treat
the language/cognition problem as one of generating new patterns, rather
than limiting ourselves to learning patterns in the style of current deep
learning, or indeed word2vec and GloVe. So instead of trying to find
factors in distributed representations the way word2vec and GloVe do,
simply resolving vectors against dimensions as a kind of dot or scalar
product (the basis of deep learning), I suggested we might formulate
cognition as something productive, like a cross or vector product.
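The contrast can be sketched in a few lines (toy 3-d vectors, my own illustration of the algebra, not the actual formulation in the papers below): a dot product collapses two word vectors to a single scalar, while a cross product generates a new vector that was not among the inputs.

```python
def dot(u, v):
    # Scalar product: resolves two vectors to one number (a similarity).
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    # Standard 3-d cross product: the result is a NEW direction in the space.
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

a = [1.0, 0.0, 0.0]
b = [0.0, 1.0, 0.0]

print(dot(a, b))    # 0.0 -- a single number: similarity, nothing new
print(cross(a, b))  # [0.0, 0.0, 1.0] -- a new vector: productive
```

The point of the contrast is just this: learning-style methods reduce combinations of representations to scalars, while a productive formulation would have combinations yield new representations.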

I haven't published much on this formally, but I do have a couple of
papers. One 20 years old, and the other a more technical update:

Freeman R. J., Example-based Complexity--Syntax and Semantics as the
Production of Ad-hoc Arrangements of Examples, Proceedings of the ANLP/NAACL
2000 Workshop on Syntactic and Semantic Complexity in Natural Language
Processing Systems, pp. 47-50.
http://www.aclweb.org/anthology/W00-0108

Parsing using a grammar of word association vectors
http://arxiv.org/abs/1403.2152

It works. You get meaningful structure from new patterns. But it seems we
need another level of computing power to make it practicable, like the
GPUs which made the 30-year-old, static distributed representation ideas
practicable in the deep learning revolution. Quantum computing might
provide that power, but probably just massive parallelism is enough. I'm
waiting for some of these spiking network architectures, like Intel's
Loihi, to escape into the wild so I can try it.

Anyway, yeah, I think you're right. There are strong parallels between
dimensional representations of language/cognition and QM.

Word2vec is a 30+ year old idea. But with a tweak, this kind of
distributed/network formulation may provide our answer.

I believe providing that tweak/trick will be the crucial step, though.
Maybe understanding how that "trick", missing from 30+ year old
word2vec-style formulations, relates to the power of cognition will unlock
the puzzle of how to usefully program quantum computers too.

-Rob

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta664aad057469d5c-Mf9ea805562ffbcd12295f174
Delivery options: https://agi.topicbox.com/groups/agi/subscription
