I'd look at it as an average of a bunch of individual feature-prediction
clusters. If I had a bunch of sensor models (feeding on the same data
stream), each swarmed and tuned for a different feature (light reflection,
thermal imaging, etc.), that all average out to some sort of ice or other
undesirable condition, I'd have a hunch it's icy.

You could think of it as: "If I had a drone going down each route, and I
was watching telemetry for every drone, what features would I average
together in my head to say 'ice!'?" Then swarm for each feature to get your
model parameters, and take the average of a set of models (built with those
parameters) that are all watching for that feature. Some features will be
more expensive to predict on than others, and you can cluster accordingly.
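Roughly, a minimal sketch of that averaging idea (the model functions and
the 0-to-1 "ice score" scale here are made up for illustration; in practice
each one would be a separately swarmed and tuned model watching one
feature):

```python
def reflectivity_model(reading):
    # Toy stand-in for a tuned model: high light reflection suggests ice.
    return min(reading / 100.0, 1.0)

def thermal_model(temp_c):
    # Toy stand-in: temperatures at or below freezing suggest ice.
    return 1.0 if temp_c <= 0 else max(0.0, 1.0 - temp_c / 5.0)

def ice_score(telemetry, models):
    """Average the per-feature predictions into a single 'ice!' score."""
    scores = [model(telemetry[feature]) for feature, model in models]
    return sum(scores) / len(scores)

# One (feature name, model) pair per swarmed feature.
models = [("reflection", reflectivity_model), ("temp_c", thermal_model)]

telemetry = {"reflection": 80, "temp_c": -2}
print(ice_score(telemetry, models))  # 0.9 for this reading
```

The averaging step is the only real point; weighting cheaper features more
heavily, or clustering expensive ones separately, drops in the same place.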

This is my current mad-scientist thinking, which I'm applying to a similar
problem for recognizing individuals (though with decidedly less noise).
We'll see how far I get while on vacation, and I'll post the results.


On Mon, Dec 9, 2013 at 4:13 PM, Benjamin Robbins <[email protected]> wrote:

>  Just a clarification – I understand that there is the ability to turn
> training off once training is complete. I am specifically looking at the
> training phase. My question lies in the fact that real-world data, when
> training, isn't always geared for optimization. It seems like there are
> many real-world problems where using the platform would just yield the
> average prediction (i.e. the average of what everyone has been doing)
> rather than the optimal result. Is there a best practice for using NuPIC
> for optimization problems? Or does one first have to have an optimal
> data set to train with? Am I just missing something basic?
>
>
>
> Thanks,
> Benjamin
>
>
>
> *From:* nupic [mailto:[email protected]] *On Behalf Of *Benjamin Robbins
> *Sent:* Monday, December 09, 2013 10:41 AM
> *To:* [email protected]
> *Subject:* [nupic-discuss] Positive Reinforcement
>
>
>
> Sorry if this is a newbie question. How does one reinforce to the CLA
> that a predicted result is favorable, given the evaluation criteria of a
> problem? For example, if I were going to use the platform to predict
> which road I should drive on when conditions are icy outside, how does
> the platform know which route is the best? The trouble I am having is
> that lots of people take lots of stupid routes when it is icy out, which
> would make for convoluted data. My understanding is that all the bad
> routes would just reinforce bad predictions from the CLA if I fed them
> through. Just because lots of people take a certain route doesn't mean it
> is the best route. In the same way, just because McDonald's has served
> billions of hamburgers doesn't mean they are the best hamburgers (or even
> the best food). How do you apply a value judgment on top of a prediction?
>
>
>
> Thanks,
>
> Benjamin
>
> _______________________________________________
> nupic mailing list
> [email protected]
> http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org
>
>
