As I was trying the problem on a practical example, I got stuck. I would be very happy if anyone knows a way around it!
By reconstruction I mean: given a set of active columns, get the most likely
input that caused it. Is there such a "reconstruction-inference" in the OPF
model, so I could use it like e.g. in
https://github.com/chetan51/linguist/blob/master/client/linguist.py#L92 ?

From the discussions above I think this is what the CLAClassifier can solve.
The Classifier.compute() method takes a True/False switch to do inference, so
that is, I guess, what I want. But how do I call it when the classification
param is my unknown?

You can see my use case here:
https://github.com/breznak/ALife/blob/master/alife/agents/SpatialPoolerAgent.py#L50

Basically I'm doing:

    SP.compute([posX, posY, action])

and then I want to query:

    SP."infer"([0, 1, ???])

The compute() header:

    def compute(self, recordNum, patternNZ, classification, learn, infer):
      """
      Process one input sample. This method is called by outer loop code
      outside the nupic-engine. We use this instead of the nupic engine
      compute() because our inputs and outputs aren't fixed size vectors
      of reals.

      Parameters:
      --------------------------------------------------------------------
      recordNum:      Record number of this input pattern. Record numbers
                      should normally increase sequentially by 1 each time
                      unless there are missing records in the dataset.
                      Knowing this information insures that we don't get
                      confused by missing records.
      patternNZ:      list of the active indices from the output below
      classification: dict of the classification information:
                        bucketIdx: index of the encoder bucket
                        actValue:  actual value going into the encoder
      learn:          if true, learn this sample
      infer:          if true, perform inference

      retval:         dict containing inference results; there is one entry
                      for each step in self.steps, where the key is the
                      number of steps and the value is an array containing
                      the relative likelihood for each bucketIdx, starting
                      from bucketIdx 0. There is also an entry containing
                      the average actual value to use for each bucket; the
                      key is 'actualValues'.
      for example:
        {1 : [0.1, 0.3, 0.2, 0.7],
         4 : [0.2, 0.4, 0.3, 0.5],
         'actualValues': [1.5, 3.5, 5.5, 7.6],
        }
      """

I'll be really glad for any ideas on how to (quickly) solve/overcome this
problem!

Many thanks,
Mark

On Mon, Nov 25, 2013 at 6:45 PM, Scott Purdy <[email protected]> wrote:

> Reconstruction, as I understand, was used for prediction and had no effect
> on the predicted or active cells. In other words, your results would be
> exactly the same whether you used reconstruction or not.
>
> If we implement feedback in the core algorithms, we will probably want
> something different. That said, reconstruction might still be useful for
> some applications of NuPIC.
>
> On Fri, Nov 22, 2013 at 6:07 PM, Scott Purdy <[email protected]> wrote:
>
>> To be clear, when I say reconstruction in this email I am talking about
>> the process of 1) taking the columns for each predicted cell in the TP,
>> 2) selecting the connected input bits (possibly including the
>> connectedness as a weight) to those corresponding SP
>> coincidences/columns, and 3) using the encoders to select the closest
>> value to the selected input bits as the predicted value.
>
> 2->3 is where the juicy and interesting bits are. It would be good to pull
> out a version that still had this implemented from git history and review
> it. There was also an exploration done by an intern that we could probably
> review and release.
>
>> On Fri, Nov 22, 2013 at 3:54 PM, Ian Danforth <[email protected]>
>> wrote:
>>
>> > Scott,
>> >
>> > Reconstruction was implemented because it was much closer to biology
>> > than any classifier.
>>
>> I think your rationale here is that there is information propagation
>> "downwards" in the brain and that this is what we are doing with
>> reconstruction. That may be true, but the way that the information is
>> used in reconstruction is different than the way it is used in feedback
>> or other processes that actually happen in the brain.
>> So it is true that information flows downward, but not true that the
>> brain performs reconstruction with that information.
>
> I may be misunderstanding you, but that's demonstrably false. The Kanizsa
> triangle is a great example
> (http://en.wikipedia.org/wiki/Illusory_contours), where the brain is using
> a high-level gestalt to reconstruct missing information. You see
> activations in V1 from these figures *as if* the input were a complete
> triangle. The feedback inhibitory pathway reduces the inhibitory signals
> in a select group of neurons that it expects to be firing, and then the
> naturally noisy input is sufficient to cause actual feed-forward input in
> those areas, reinforcing the perception that there are lines/edges where
> there are none. There are also cases where excitatory feedback drives
> lower-level activations, which is exactly what reconstruction does. It
> reconstructs missing parts of the input from whatever was fed in.
>
> Ian
>
> _______________________________________________
> nupic mailing list
> [email protected]
> http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org

--
Marek Otahal :o)
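P.S. A minimal, self-contained sketch of how the retval dict documented in
the compute() docstring above could be read once inference returns it. The
helper most_likely_value() is hypothetical (not part of NuPIC); the sample
dict just mirrors the docstring's "for example" values.

```python
# Hypothetical helper (not part of NuPIC) for reading the retval dict
# that CLAClassifier.compute() is documented above to return.

def most_likely_value(retval, steps):
    """Return the actual value whose bucket has the highest relative
    likelihood `steps` steps ahead."""
    likelihoods = retval[steps]  # one relative likelihood per bucketIdx
    best_bucket = max(range(len(likelihoods)), key=lambda i: likelihoods[i])
    return retval['actualValues'][best_bucket]

# Sample result shaped like the docstring's example:
retval = {
    1: [0.1, 0.3, 0.2, 0.7],
    4: [0.2, 0.4, 0.3, 0.5],
    'actualValues': [1.5, 3.5, 5.5, 7.6],
}

print(most_likely_value(retval, 1))  # bucket 3 has likelihood 0.7 -> 7.6
```

For inference only, one would presumably call compute() with learn=False and
infer=True; whether the classification dict may then be left out is exactly
the open question of this thread.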
