Just a clarification - I understand that there is the ability to turn training off once training is complete. I am specifically asking about the training phase. My question is that real-world data, when training, isn't always geared toward optimization. It seems like there would be many real-world problems where using the platform would just yield the average prediction (i.e. the average of what everyone has been doing) rather than the optimal result. Is there a best practice for using NuPIC on optimization problems? Or does one have to start with an optimal data set to train on? Am I just missing something basic?
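For concreteness, here is roughly the flow I have in mind (a minimal sketch, assuming the OPF API; "model_params" and the example records are placeholders I invented, not real files, and I may well be misusing something):

    # Sketch of the train-then-freeze flow I am describing (OPF API).
    # "model_params" is a hypothetical parameter module, not a real file.
    from nupic.frameworks.opf.modelfactory import ModelFactory

    import model_params  # hypothetical: MODEL_PARAMS for a 'route' field

    model = ModelFactory.create(model_params.MODEL_PARAMS)
    model.enableInference({"predictedField": "route"})

    # Training phase: learning is on, so the model absorbs whatever
    # routes people actually took -- good choices and bad ones alike.
    for record in [{"route": "routeA"}, {"route": "routeB"}, {"route": "routeA"}]:
        model.run(record)

    # Once training is complete: freeze learning and just infer. At this
    # point the prediction reflects the *most common* behavior in the
    # data, which is exactly my worry -- common is not the same as optimal.
    model.disableLearning()
    result = model.run({"route": "routeA"})
    print result.inferences["multiStepBestPredictions"][1]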
Thanks,
Benjamin

From: nupic [mailto:[email protected]] On Behalf Of Benjamin Robbins
Sent: Monday, December 09, 2013 10:41 AM
To: [email protected]
Subject: [nupic-discuss] Positive Reinforcement

Sorry if this is a newbie question. How does one tell the CLA that a predicted result is favorable, given the evaluation criteria of a problem? For example, if I were going to use the platform to predict which road I should drive on when conditions are icy, how does the platform know which route is best? The trouble I am having is that lots of people take lots of stupid routes when it is icy out, which would make for convoluted data. My understanding is that all the bad routes would just reinforce bad predictions from the CLA if I fed them through. Just because lots of people take a certain route doesn't mean it is the best route, in the same way that McDonald's having served billions of hamburgers doesn't mean they are the best hamburgers (or even the best food). How do you apply a value judgment on top of a prediction?

Thanks,
Benjamin
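P.S. To make the "value judgment on top of a prediction" question concrete, the only thing I have come up with so far is post-processing the CLA's likelihoods with my own cost table. Everything below is invented for illustration -- ICY_COST is not part of NuPIC, and "model" is the one from the sketch above:

    # Hypothetical post-processing: let the CLA say which routes are
    # *likely*, then apply my own value judgment to pick among them.
    ICY_COST = {"routeA": 1.0, "routeB": 5.0, "routeC": 2.5}  # invented costs

    result = model.run({"route": "routeA"})
    likelihoods = result.inferences["multiStepPredictions"][1]  # {route: prob}

    # Keep only routes the model considers plausible, then choose the
    # one my cost table says is safest -- popularity alone never wins.
    # (Assumes at least one route clears the plausibility threshold.)
    plausible = [r for r, p in likelihoods.items() if p > 0.1]
    best = min(plausible, key=lambda r: ICY_COST.get(r, float("inf")))

Is that kind of post-processing really the intended approach, or is there built-in support for this?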
