Hey all, 

Got a few questions after going through Matt’s tutorials (very informative btw, 
thanks Matt!). 

Why does the prediction lag behind the input, instead of leading it, until a large
number of samples has been processed? I remember reading on the mailing list that
this has something to do with HTM passing observed values through as-is when it
can't yet predict well. Could someone elaborate on that, both in terms of how it's
implemented and the rationale behind it? It tends to produce very misleading
plots, especially since the anomaly score usually isn't high enough to flag the
pass-through events.

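To illustrate what I mean by misleading: here's a toy sketch (my own, not NuPIC code) of a predictor that falls back to echoing the last observed value when it can't predict. The resulting series trails the input by one step, which on a plot looks like a lagging copy of the signal rather than a forecast:

```python
# Toy sketch (not NuPIC internals): a predictor that can't yet model
# the sequence and falls back to passing the last observation through.
# The "predicted" series then trails the input by one step.

signal = [10, 12, 15, 11, 9, 14, 13]

predictions = []
last_seen = None
for value in signal:
    # Fallback: predict that the next value equals the current one.
    predictions.append(last_seen)
    last_seen = value

# predictions[i] is just signal[i-1]: the plot lags, it doesn't lead.
print(predictions)
```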
Why is the timestamp included as an encoded field and passed to the network in
the gym example? Is it processed in the same way as the consumption field, or is
it only used to align predictions with their corresponding inputs? For cases
with uniform sampling (like the sine example), can we simply ignore that field
and encode only the equivalent of the consumption field?

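For reference, here's roughly what I understand the encoder section of the gym example's model params to look like; this is a hedged sketch from memory, so the exact keys, value ranges, and encoder parameters may differ across NuPIC versions. It suggests the timestamp is genuinely encoded (as time-of-day and weekend buckets), not just carried along for alignment:

```python
# Hedged sketch of a hotgym-style encoder config (field names from the
# gym example; parameter values here are illustrative assumptions).
encoders = {
    "consumption": {
        "fieldname": "consumption",
        "name": "consumption",
        "type": "ScalarEncoder",
        "minval": 0.0,
        "maxval": 100.0,   # assumed range, for illustration only
        "n": 50,
        "w": 21,
    },
    "timestamp_timeOfDay": {
        "fieldname": "timestamp",
        "name": "timestamp_timeOfDay",
        "type": "DateEncoder",
        "timeOfDay": (21, 1),  # (width, radius in hours), assumed
    },
    "timestamp_weekend": {
        "fieldname": "timestamp",
        "name": "timestamp_weekend",
        "type": "DateEncoder",
        "weekend": 21,
    },
}
```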
Is there a relationship between the anomaly score/anomaly likelihood and the
prediction error (= prediction - ground truth)? Or is the anomaly score an
indication of HTM's confusion/certainty rather than of its predictive accuracy?
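To make the distinction concrete, here's a toy contrast (my sketch, not NuPIC internals). As I understand it, the raw anomaly score is the fraction of currently active columns that were not predicted, so it measures surprise over the encoded representation rather than numeric error:

```python
# Toy contrast between numeric prediction error and an HTM-style raw
# anomaly score computed over (hypothetical) column indices.

def prediction_error(predicted_value, actual_value):
    # Plain numeric error between predicted and observed values.
    return predicted_value - actual_value

def raw_anomaly_score(predicted_columns, active_columns):
    # Fraction of active columns that were not among the predicted ones.
    active = set(active_columns)
    unexpected = active - set(predicted_columns)
    return len(unexpected) / len(active)

# All active columns were predicted: zero anomaly, even if the decoded
# numeric prediction could still be somewhat off.
print(raw_anomaly_score({1, 2, 3, 4}, {1, 2, 3, 4}))  # 0.0
# Half the active columns were unexpected: anomaly of 0.5.
print(raw_anomaly_score({1, 2, 5, 6}, {1, 2, 3, 4}))  # 0.5
```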

Thank you for any hints. Much appreciated :). 
best,
Nick
