Vlad,
Thanks for your below reply to my prior email of Tue 10/21/2008 7:08 PM
I agree with most of your reply. There are only two major issues upon
which I wanted further confirmation, clarification, or comment.
1. WHY C(N,S) IS DIVIDED BY T(N,S,O) TO FORM A LOWER BOUND FOR
Ben,
In my email starting this thread on 10/15/08 7:41pm I pointed out that a
more sophisticated version of the algorithm would have to take connection
weights into account in determining cross talk, as you have suggested below.
But I asked for the answer to a simpler version of the
makes sense, yep...
i guess my intuition is that there are obviously a huge number of
assemblies, so that the number of assemblies is not the hard part, the hard
part lies in the weights...
On Tue, Oct 21, 2008 at 11:18 AM, Ed Porter [EMAIL PROTECTED] wrote:
Ben,
In my email starting this
Vlad,
Thanks. In response to your email I tried plugging different values into the
Excel spreadsheet I sent by a prior email under this subject line, and,
lo and behold, got some interesting answers for the number A of assemblies
(or sets) of nodes of uniform size S you can create from N
C(N,S) is the total number of assemblies of size S that fit in the N
nodes, if you forget about overlaps.
Each assembly overlaps in X places with other C(S,X)*C(N-S,S-X)
assemblies: if another assembly overlaps with our assembly in X
places, then X nodes are inside S nodes of our assembly, which
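Vlad's count of the assemblies overlapping a fixed assembly in exactly X places can be checked numerically; summing it over all X recovers C(N,S), which is Vandermonde's identity. A minimal sketch (the function name is mine, not from the thread):

```python
from math import comb

def overlap_count(N, S, X):
    """Number of size-S assemblies overlapping a fixed size-S
    assembly in exactly X nodes: pick X of its S nodes, and
    the remaining S - X from the other N - S nodes."""
    return comb(S, X) * comb(N - S, S - X)

# Summing over every possible overlap X recovers all C(N,S)
# assemblies -- Vandermonde's identity as a sanity check.
N, S = 20, 5
assert sum(overlap_count(N, S, X) for X in range(S + 1)) == comb(N, S)
```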
Ben,
You're right. Although one might seem to be getting a free lunch in terms
of being able to create more assemblies than the number of nodes from which
they are created, it would appear that the extra number of links required
not only for auto-associative activation within an assembly, but
Vlad,
Thanks for your below reply of Tue 10/21/2008 2:17 PM.
I have spent hours trying to understand your explanation, and I now think I
understand much of it, but not all of it. I have copied much of it word for
word below and have inserted my questions about its various portions.
Ben,
Upon thinking more about my comments below, in an architecture such as the
brain where connections are much cheaper (at least more common) than nodes,
cell assemblies might make sense.
This is particularly true since one could develop tricks to reduce the
number of links that would
(I agree with the points I don't quote here)
General reiteration on notation: O-1 is the maximum allowed overlap,
overlap of O is already not allowed (it was this way in your first
message).
On Wed, Oct 22, 2008 at 3:08 AM, Ed Porter [EMAIL PROTECTED] wrote:
T(N,S,O) = SUM FROM X = 0 TO S-O OF
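The quoted sum is cut off, so the following is an assumed reading: T(N,S,O) counts, for a fixed assembly, all assemblies (itself included) whose overlap with it is O or more, i.e. the disallowed ones. Under that reading, C(N,S)/T(N,S,O) is the usual greedy counting lower bound:

```python
from math import comb

def T(N, S, O):
    """Assumed reading of the truncated sum: the number of size-S
    assemblies whose overlap with a fixed size-S assembly is O or
    more (the disallowed range, the assembly itself included)."""
    return sum(comb(S, X) * comb(N - S, S - X) for X in range(O, S + 1))

def lower_bound(N, S, O):
    """Greedy counting bound: each assembly you pick rules out at
    most T(N,S,O) of the C(N,S) candidates, so at least
    C(N,S)/T(N,S,O) mutually admissible assemblies exist."""
    return comb(N, S) // T(N, S, O)

# When O = S every distinct assembly is admissible, so the bound
# is exact: T = 1 and the bound equals C(N,S).
assert lower_bound(10, 3, 3) == comb(10, 3)
```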
Thanks to Ben and Vlad for their help answering my question about how to
estimate the number of node assemblies A(N,O,S) one can get from a total set
of N nodes, where each assembly has a size of S, and a maximum overlap with
any other set of O. I am sorry I did not respond sooner, but I spent a
On Mon, Oct 20, 2008 at 6:37 PM, Ed Porter [EMAIL PROTECTED] wrote:
The tables at http://www.research.att.com/~njas/codes/Andw/index.html#dist16
indicate the number of cell assemblies would, in fact, be much larger than
the number of nodes, WHERE THE OVERLAP WAS RELATIVELY LARGE, which would
On Mon, Oct 20, 2008 at 12:07 PM, Ed Porter [EMAIL PROTECTED] wrote:
As I said in my last email, since the Wikipedia article on constant
weight codes said "APART FROM SOME TRIVIAL OBSERVATIONS, IT IS GENERALLY
IMPOSSIBLE TO COMPUTE THESE NUMBERS IN A STRAIGHTFORWARD WAY," and since all
of the
I also don't understand whether A(n,d,w) is the number of sets where the
Hamming distance is exactly d (as it would seem from the text of
http://en.wikipedia.org/wiki/Constant-weight_code ), or whether it is the
number of sets where the Hamming distance is d or less. If the former case
is true
Wait, now I'm confused.
I think I misunderstood your question.
Bounded-weight codes correspond to the case where the assemblies themselves
can have n or fewer neurons, rather than exactly n.
Constant-weight codes correspond to assemblies with exactly n neurons.
A complication btw is that an
Ben,
I am interested in exactly the case where individual nodes partake in
multiple attractors,
I use the notation A(N,O,S) which is similar to the A(n,d,w) formula of
constant weight codes, except as Vlad says you would plug my variables into
the constant weight formula by using A(N,
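The substitution Ed is spelling out is cut off above. The standard bridge between the two notations (my own gloss, not from the thread) is that two size-S assemblies, viewed as weight-S binary indicator vectors, overlapping in V nodes lie at Hamming distance 2(S - V), so a maximum allowed overlap of O - 1 corresponds to a minimum distance of 2(S - O + 1):

```python
def hamming_distance(a, b):
    """Hamming distance between two node sets viewed as binary
    indicator vectors: nodes in exactly one of the two sets."""
    return len(a ^ b)  # size of the symmetric difference

# Two size-S assemblies sharing V nodes differ in 2*(S - V)
# positions: S - V nodes each that the other one lacks.
S, V = 10, 4
a = set(range(S))                             # nodes 0..9
b = set(range(V)) | set(range(S, 2 * S - V))  # shares exactly V nodes with a
assert len(b) == S
assert hamming_distance(a, b) == 2 * (S - V)
```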
But, suppose you have two assemblies A and B, which have nA and nB neurons
respectively, and which overlap in O neurons...
It seems that the system's capability to distinguish A from B is going to
depend on the specific **weight matrix** of the synapses inside the
assemblies A and B, not just on
On Tue, Oct 21, 2008 at 12:07 AM, Ed Porter [EMAIL PROTECTED] wrote:
I built an Excel spreadsheet to calculate this for various values of N, S,
and O. But when O = zero, the value of C(N,S)/T(N,S,O) doesn't make sense
for most values of N and S. For example if N = 100 and S = 10, and O =
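Ed's O = 0 puzzle can be reproduced directly. Reading T(N,S,O) as the count of assemblies overlapping a fixed one in O or more nodes (an assumption; the thread's sum is truncated), O = 0 makes every assembly conflict with every other, so the ratio degenerates:

```python
from math import comb

def ratio(N, S, O):
    """C(N,S) / T(N,S,O), reading T as the count of size-S
    assemblies overlapping a fixed one in O or more nodes
    (an assumption -- the thread's sum is truncated)."""
    T = sum(comb(S, X) * comb(N - S, S - X) for X in range(O, S + 1))
    return comb(N, S) / T

# At O = 0 every pair trivially "overlaps" in at least 0 nodes,
# so T = C(N,S) (Vandermonde) and the ratio collapses to 1 --
# even though N/S = 10 disjoint assemblies plainly exist.
assert ratio(100, 10, 0) == 1.0
```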
Ben Goertzel wrote on Wednesday, October 15, 2008 7:57 PM
Is the other node assembly B fixed? So you're asking how many assemblies
of size S will have less than O nodes overlap with some specific node
assembly B with size S?
[Ed Porter]
Ben,
If I understand your above quoted
Eric,
Actually I am looking for a function A =f(N,S,O).
If one leaves out the O, and merely wants to find the number of
subcombinations of size S that can be formed from a population of size N,
just apply the standard formula for combinations. But adding the limitation
that none of the
Matt,
From a brief glance at your formula, it seems like it would be more likely to
apply to a system in which each node is in only one cell assembly. This
makes the math much simpler, but it fails to take advantage of the main
advantages of cell assemblies, such as: possibly allowing many
OK, I see what you're asking now
I think some bounds on the number you're looking for are given by some
classical combinatorial theorems, such as you may find in
http://www.math.ucla.edu/~bsudakov/cross-intersections.pdf
(take their set L to consist of {0,...,O} ... and set A_1 = A_2), and
Ben,
Thanks. I spent about an hour trying to understand this paper, and, from my
limited reading and understanding, it was not clear it would answer my
question, even if I took the time that would be necessary to understand it,
although it clearly was in the same field of inquiry.
I am pretty sure their formulas give bounds on the number you want, but not
an exact calculation...
Sorry the terminology is a pain! At some later time I can dig into this
for you but this week I'm swamped w/ practical stuff...
On Thu, Oct 16, 2008 at 2:35 PM, Ed Porter [EMAIL PROTECTED]
On Thu, Oct 16, 2008 at 7:01 PM, Ed Porter [EMAIL PROTECTED] wrote:
The answer to this question would provide a rough indication of the
representational capacity of using node assemblies to represent concepts vs
using separate individual node, for a given number of nodes. Some people
claim
Thanks for offering to look into this. The bounds I saw were upper bounds
(such as just the number of possible combinations), and I was more
interested in lower bounds.
-----Original Message-----
From: Ben Goertzel [mailto:[EMAIL PROTECTED]
Sent: Thursday, October 16, 2008 2:45 PM
To:
Vlad,
They could be used much like normal nodes, except that a given set of basic
nodes that form a conceptual node would be auto-associative within their own
population, and they would have some of the benefits of redundancy,
robustness, resistance to noise, and gradual forgetting, that I
Vlad,
If, as your post below indicates, it is easy to prove an example of how large
the ratio of the number of cell assemblies with say less than 5% overlap
with the population of any other assembly is compared to the number of nodes
out of which such cell assemblies are made --- could you provide
On Fri, Oct 17, 2008 at 12:46 AM, Ed Porter [EMAIL PROTECTED] wrote:
Vlad,
They could be used much like normal nodes, except that a given set of basic
nodes that form a conceptual node would be auto-associative within their own
population, and they would have some of the benefits of
Ed,
After a little more thought, it occurred to me that this problem was already
solved in coding theory ... just take the bound given here, with q=2:
http://en.wikipedia.org/wiki/Hamming_bound
The bound is achievable using Hamming codes (linked to from that page), so
it's realizable.
What
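The Hamming bound Ben links, specialized to q = 2, is easy to evaluate; a minimal sketch, with the classic check that the binary [7,4] Hamming code meets the bound with equality:

```python
from math import comb

def hamming_bound(n, d, q=2):
    """Sphere-packing (Hamming) bound: an upper bound on the
    number of codewords of length n with minimum distance d
    over a q-ary alphabet."""
    t = (d - 1) // 2  # radius of the disjoint Hamming balls
    ball = sum(comb(n, k) * (q - 1) ** k for k in range(t + 1))
    return q ** n // ball

# The binary [7,4] Hamming code meets the bound exactly:
# 2^7 / (1 + 7) = 16 codewords.
assert hamming_bound(7, 3) == 16
```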
They also note that according to their experiments, bounded-weight codes
don't offer much improvement over constant-weight codes, for which
analytical results *are* available... and for which lower bounds are given
at
http://www.research.att.com/~njas/codes/Andw/
ben
On Thu, Oct 16, 2008 at
However, it's noteworthy that Hopfield nets and other ANN models generally
have memory capacity far below what error-correcting-code theory would
suggest is possible.
So, these bounds are not really that useful, because they don't seem to
correspond to realistic incremental learning methods
One more addition...
Actually the Hamming-code problem is not exactly the same as your problem
because it does not place an arbitrary limit on the size of the cell
assembly... oops
But I'm not sure why this limit is relevant, since cell assemblies in the
brain could be very large
Anyway, it
I think A = floor((N-O)/(S-O)) * C(N,O) / (O+1).
Charles Griffiths
--- On Wed, 10/15/08, Ed Porter [EMAIL PROTECTED] wrote:
From: Ed Porter [EMAIL PROTECTED]
Subject: [agi] Who is smart enough to answer this question?
To: agi@v2.listbox.com
Date: Wednesday, October 15, 2008, 4:40 PM
Is
On Fri, Oct 17, 2008 at 5:04 AM, charles griffiths
[EMAIL PROTECTED] wrote:
I think A = floor((N-O)/(S-O)) * C(N,O) / (O+1).
Doesn't work for O=2 and S=2 where A=C(N,2).
P.S. Is this the normal order for writing the arguments of C(,)? I used
the opposite.
P.P.S. In original problem, O-1 is the
You're right. In A = floor((N-O)/(S-O)) * C(N,O) / (O+1), O is the maximum
overlap.
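Charles's formula can be checked directly. A sketch, taking the expression exactly as written (with O the maximum overlap): it reduces to the clearly correct floor(N/S) at O = 0, but divides by zero at Vlad's S = O = 2 counterexample, where in fact every one of the C(N,2) pairs is admissible:

```python
from math import comb

def charles_A(N, S, O):
    """Charles's proposed formula, with O the maximum overlap:
    floor((N-O)/(S-O)) * C(N,O) / (O+1)."""
    return (N - O) // (S - O) * comb(N, O) / (O + 1)

# At O = 0 (fully disjoint assemblies) it reduces to floor(N/S).
assert charles_A(20, 5, 0) == 4

# Vlad's counterexample S = O = 2 makes the formula divide by
# zero, while every one of the C(N,2) pairs is admissible there.
try:
    charles_A(20, 2, 2)
except ZeroDivisionError:
    pass
```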
--- On Thu, 10/16/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
From: Vladimir Nesov [EMAIL PROTECTED]
Subject: Re: [agi] Who is smart enough to answer this question?
To: agi@v2.listbox.com
Date: Thursday,
On Fri, Oct 17, 2008 at 5:31 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
I still think this combinatorics problem is identical to the problem of
calculating the efficiency of bounded-weight binary codes, as I explained
in a prior email...
Yes, it seems to be a well-known problem.
Right, but his problem is equivalent to bounded-weight, not constant-weight
codes...
On Thu, Oct 16, 2008 at 10:04 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
On Fri, Oct 17, 2008 at 5:31 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
I still think this combinatorics problem is identical to the
On Fri, Oct 17, 2008 at 6:05 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
Right, but his problem is equivalent to bounded-weight, not constant-weight
codes...
Why? Bounded-weight codes are upper-bounded by Hamming weight, which
corresponds to cell assemblies having size of S or less, whereas in
Oh, you're right...
I was mentally translating his problem into one that made more sense to me
biologically, as I see no reason why one would assume all cell assemblies to
have a fixed size ... but it makes slightly more sense to assume an upper
bound on their size...
ben
On Thu, Oct 16, 2008
On Fri, Oct 17, 2008 at 6:26 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
Oh, you're right...
I was mentally translating his problem into one that made more sense to me
biologically, as I see no reason why one would assume all cell assemblies to
have a fixed size ... but it makes slightly more
Well, coding theory does let you derive upper bounds on the memory capacity
of Hopfield-net type memory models...
But, the real issue for Hopfield nets is not theoretical memory capacity,
it's tractable incremental learning algorithms
Along those lines, this work is really nice...
Is anybody on this list smart and/or knowledgeable enough to come up with a
formula for the following (I am not):
Given N neural net nodes, what is the number A of unique node assemblies
(i.e., separate subsets of N) of size S that can have less than O
overlapping nodes, with the population of
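For small N, S, and O, Ed's question can also be attacked by direct search. A hedged sketch of a greedy construction (it yields a lower bound on A, not necessarily the true maximum, and it enumerates all C(N,S) subsets, so it only works for small parameters):

```python
from itertools import combinations

def greedy_assemblies(N, S, O):
    """Greedily collect size-S subsets of range(N) whose pairwise
    overlap is less than O. A lower bound on A, not the maximum."""
    chosen = []
    for cand in combinations(range(N), S):
        cand = set(cand)
        if all(len(cand & a) < O for a in chosen):
            chosen.append(cand)
    return chosen

# With N=9, S=3, O=1 (no shared nodes allowed at all), greedy
# finds the 3 disjoint triples.
assert len(greedy_assemblies(9, 3, 1)) == 3
```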
On Wed, Oct 15, 2008 at 7:40 PM, Ed Porter [EMAIL PROTECTED] wrote:
Is anybody on this list smart and/or knowledgeable enough to come up with a
formula for the following (I am not):
Given N neural net nodes, what is the number A of unique node assemblies
(i.e., separate subsets of N) of size
Is anybody on this list smart and/or knowledgeable enough to come up with a
formula for the following (I am not):
I don't think I'm the person to answer this for you. But I do have
some insights.
Given N neural net nodes, what is the number A of unique node assemblies
(i.e., separate subsets of
--- On Wed, 10/15/08, Ed Porter [EMAIL PROTECTED] wrote:
Given N neural net nodes, what is the
number A of unique node assemblies
(i.e., separate subsets of N) of size
S that can have less than O
overlapping nodes, with the population
of any other such node assembly
similarly selected from
Even if he wants fault tolerance (from cell damage) through redundancy?
Rafael C.P.
On Wed, Oct 15, 2008 at 9:06 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
--- On Wed, 10/15/08, Ed Porter [EMAIL PROTECTED] wrote:
Given N neural net nodes, what is the
number A of unique
Even if he wants fault tolerance (from cell damage) through redundancy?
Why model neuron attrition? These kinds of calculations are normally
done in production mode, that is, within computing setups not prone to
component failure. Maybe you're thinking of neural nets that map onto
a large number