Hi all,

Thank you for all your comments and corrections. I'll try to answer the queries below. (I hope you guys won't hate me, and will give me more corrections when I make mistakes...)
Hi Tim,

If my explanation gave you any negative impression, that was not my intention, and I deeply apologize. As I said in my last mail, I enjoyed exploring the beauty of the SP, and I'm pretty sure you will have a good time too. I expect to have an even more enjoyable time when I get to play with the TP, because the TP is the truly unique feature of the CLA; deep learning does not handle the concept of time, IIUC.

2013/8/22 Tim Boudreau <[email protected]>

> Hi, Hideaki-san,
>
> Thanks for your response! Some comments inline:
> (snip)
> I figure that topology really should be conceptually separate - if you've
> got an array of stuff, you can represent that as any number of dimensions;
> it helps if n is an n-root of the number of elements, but if you control
> the number of elements, that's easy. So I'd think you'd have an
> abstraction for "topology" and when you want to find neighbors of a
> cell/column, you'd "ask the topology". Otherwise you'd end up hard-coding
> assumptions that might be difficult to rip out later. I find it easier to
> think of regions in terms of 2D layers, but that's what makes sense for a
> human, not necessarily a computer.

I think you're right. I've seen the same discussion in another thread. We can linearize any number of dimensions down to 1D, e.g. [x, y] to [y * width + x].

> So, I guess my question is: Does a *proximal* dendrite connect to more
> than one bit?

I think so. To be more precise, one proximal dendrite segment has many synapses, and each synapse connects to one bit.

>> If you have 1024 columns with 1024 synapses in your region, and all
>> synapses are connected, you need to check 1 million links to see how
>> strongly each column is activated in the SP.
>
> Yeah, this seems to be where the combinatorics go through the roof, and
> where hardware could help; probably some clever optimizations are possible
> to pre-determine a set that you don't need to check.

Right. There seem to be many concerns floating around about optimizations.
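To make the linearization point concrete, here is a minimal sketch of the kind of "topology" abstraction Tim describes, where callers ask the topology for neighbors instead of hard-coding 2D assumptions. All names here (Topology2D, to_index, neighbors) are illustrative, not the actual NuPIC API.

```python
# Hypothetical "topology" object: stores cells/columns as a flat 1D array
# internally, but answers neighborhood queries in 2D terms.
class Topology2D:
    def __init__(self, width, height):
        self.width = width
        self.height = height

    def to_index(self, x, y):
        """Linearize a 2D coordinate to a 1D index: [x, y] -> y * width + x."""
        return y * self.width + x

    def to_coords(self, index):
        """Inverse mapping: 1D index back to (x, y)."""
        return index % self.width, index // self.width

    def neighbors(self, index, radius=1):
        """1D indices of cells within `radius` (Chebyshev distance) of `index`."""
        x, y = self.to_coords(index)
        result = []
        for ny in range(max(0, y - radius), min(self.height, y + radius + 1)):
            for nx in range(max(0, x - radius), min(self.width, x + radius + 1)):
                if (nx, ny) != (x, y):
                    result.append(self.to_index(nx, ny))
        return result

topo = Topology2D(width=4, height=3)
assert topo.to_index(2, 1) == 6        # y * width + x = 1 * 4 + 2
assert topo.to_coords(6) == (2, 1)
assert topo.neighbors(0) == [1, 4, 5]  # a corner cell has 3 neighbors
```

Swapping in a different topology (3D, toroidal, etc.) would then only change this one class, not the SP/TP code that calls it.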
I guess speed is everybody's interest. That is one of the things I like about the CLA: it seems to focus on the balance between real neuroscience and proper abstractions of it, so as to get realistic performance on contemporary computer architectures. If we bring in too much detail, we end up with something like the below, though that has its own purpose and benefits.

http://motherboard.vice.com/blog/for-one-second-a-supercomputer-mimicked-the-human-brain

> This is the sort of statement that makes me wonder if I'm understanding
> things right. One *column* has many synapses, or one cell's dendrite has
> many synapses? Or are we summing column.cells.activeSynapseCount?

I think both. One column has many synapses for feed-forward input; in addition, each cell has many synapses for lateral inputs, used to predict future values.

> I was thinking of a 2D topology of columns, so you could refer to them as
> 0,0 ... 0,1 ... 0,2 ... 1,0 ... etc. In that case, say with 4 cells per
> column, the coordinates of one cell would be something like 0,1:3.

I think that is okay.

> So basically, when creating a new distal segment, just randomly pick some
> cells "near" (in whatever topology) the cell it connects to?

For distal segments, I don't think we have a concept of near or far; the candidates are all cells. This is mentioned under "Implementation details and terminology" on page 42 of the white paper.

I haven't explored the TP yet, so I probably should not answer about the TP in depth... Have you watched the videos on the page below? Matt-san (?) has collected various videos:

http://numenta.org/media.html

It helped me a lot in understanding the white paper, with many good charts, animations, and explanations. ;)

Best Regards,
Hideaki Suzuki.
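The distal-segment point above (every cell in the region is a candidate, with no near/far restriction) could be sketched like this. This is only an illustration of the sampling idea under my stated assumptions; the function name and parameters are hypothetical, not NuPIC's implementation.

```python
import random

def pick_distal_candidates(num_cells, own_cell, sample_size, rng=random):
    """Randomly sample candidate cells for a new distal segment.

    Per the white paper's description, every cell in the region is a
    candidate (no topology/distance restriction); we only exclude the
    cell the segment belongs to.
    """
    candidates = [c for c in range(num_cells) if c != own_cell]
    return rng.sample(candidates, sample_size)

# Example: a region of 1024 columns x 4 cells, sampling 20 lateral sources.
rng = random.Random(42)
chosen = pick_distal_candidates(num_cells=1024 * 4, own_cell=0,
                                sample_size=20, rng=rng)
assert len(chosen) == 20
assert 0 not in chosen  # a cell does not connect distally to itself
```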
_______________________________________________
nupic mailing list
[email protected]
http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org
