Hi Fergal,

No problem. It just felt a little insulting to the AI of the '70s. I
started at the end of that AI wave, working primarily with images. For
example, I built a 2D invariant pattern-recognition system, a 3D visual
navigation system for a Mars rover, and a system for cutting rough
diamonds. Now I am back to neurons, but the text is treated as images.

As for evaluating AI, I think everything is going fine. But people get
nervous when it touches on their own existence; there really are many
social factors at play.

I am pleased that we have come to a similar view of CEPT. You will get
such a description in your NN as a picture of synapses. You can also
obtain the syntactic and semantic frequency connections between words, if
you can extract them from the network. I know of similar work in Russia,
but not with neurons.
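
By "frequency connections" I mean roughly the following (a toy sketch
only: it counts co-occurrences straight from text with an arbitrary
window size, whereas in my network the same statistics would have to be
read off the synapses):

from collections import Counter

def word_connections(text, window=5):
    """Count how often word pairs co-occur within a sliding window."""
    words = text.lower().split()
    pairs = Counter()
    for i in range(len(words)):
        for j in range(i + 1, min(i + window, len(words))):
            pairs[frozenset((words[i], words[j]))] += 1
    return pairs

conns = word_connections("shall i compare thee to a summer's day "
                         "thou art more lovely and more temperate")
print(conns.most_common(3))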

I do not work with large texts; Shakespeare's Sonnets are the maximum.
My 2000-neuron network reasons about why Socrates is mortal, and about
longer chains of reasoning.
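
To give the flavour of the Socrates reasoning (a toy rule-following
sketch only; my actual network is associative and does not work like
this):

# The classic syllogism encoded as 'is-a' facts.
facts = {"socrates": "man", "man": "mortal"}

def why(entity, goal):
    """Follow the is-a chain from entity to goal, returning the steps."""
    chain = [entity]
    while chain[-1] != goal and chain[-1] in facts:
        chain.append(facts[chain[-1]])
    return chain if chain[-1] == goal else None

print(why("socrates", "mortal"))   # ['socrates', 'man', 'mortal']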

Thanks for the link.

Regards,
Ivan


Date: Mon, 16 Sep 2013 13:22:46 +0100
From: Fergal Byrne <[email protected]>
To: "NuPIC general mailing list." <[email protected]>
Subject: Re: [nupic-dev] HTM in Natural Language Processing

Hi Ivan,

I hope you don't interpret my comments as a criticism of you or your
thoughts on this. Quite the opposite. I'm trying to help steer the focus
towards a clearly shared understanding of what we're trying to achieve with
this exercise. I have very little experience of other approaches, and rely
heavily on reading up and having discussions with people who know far more
about it than I do!

I was deliberately being a little harsh on traditional AI; it's more a
cultural problem than anything, because the norm seems to be one of
announcing "breakthroughs" which turn out to have no general applicability.
That being said, I absolutely recognise the progress in understanding
cognition that has come from 50 years of hard work by some very smart
people, and I also appreciate the importance of having a computational
model of cognition which we can use to describe the phenomena we're
studying.

From talking with Francisco (on and off this list - see the recent planning
video), it seems that the CEPT SDRs are indeed being generated using a
similar approach to that used in the SP. The sparse bitmaps we're seeing
are derived from a grayscale "image" which itself is a map of the semantic
values associated with each word. It may be worth exploring how this
process works in order to see how well it matches the needs of the CLA
(this will be part of the feedback and improvement process I mentioned in
the previous email).
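
To make that concrete, here is roughly how I picture the bitmap step (a
sketch under my own assumptions only: the image size, the 2% sparsity
and the simple top-k threshold are my guesses, not CEPT's actual
process; the top-k step just mimics the SP's winner-take-all
inhibition):

import numpy as np

def sdr_from_semantic_image(gray, sparsity=0.02):
    """Binarise a grayscale 'semantic image' into a sparse SDR by
    keeping only the k strongest pixels (SP-style winner-take-all)."""
    flat = gray.ravel()
    k = max(1, int(sparsity * flat.size))      # number of winning bits
    winners = np.argpartition(flat, -k)[-k:]   # indices of k largest values
    sdr = np.zeros(flat.size, dtype=np.uint8)
    sdr[winners] = 1
    return sdr.reshape(gray.shape)

rng = np.random.default_rng(0)
word_map = rng.random((128, 128))              # stand-in semantic map for one word
sdr = sdr_from_semantic_image(word_map)
print(sdr.sum(), "of", sdr.size, "bits on")    # ~2% sparsity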

Again, we have no idea what this setup is going to do. We have some hope
that it will achieve some kind of linguistic performance, but we'll most
likely have to do some evolution on it before we're sure we're onto
something.

If you read Pinker's discussions on this, it seems that there are a number
of areas (i.e. CLAs) in the brain which specialise in certain language
functions, and that damage to one area often leaves the other functions
intact, while disrupting the connections between them seems to have more
complicated consequences. All these areas are being fed some kind of
semantic SDRs from other regions in the brain (see the beautiful images
and animations on http://www.neuroscienceblueprint.nih.gov/connectome/ for
examples of this plumbing).

Regards,

Fergal Byrne
