I think there should be a correlation between temperature(t-dt) and
tilt(t): when it warms up, the bridge begins to expand, but this takes
time (dt).

However, looking at the data it is hard to be sure of this. Here are
temperature 1 (red) and tilt 1 (green). The tilt seems to follow
temperature almost immediately, without a time lag, although you could
argue that after a temperature peak has been crossed, it takes a while for
the tilt to decrease again. After about 600 time steps there is not much
correlation between the two until around 2000 time steps, when they get
back into sync. So the correlation is far from simple.


[image: plot of temperature 1 (red) and tilt 1 (green) over time]
BTW, most of the temperature sensors 1-10 give similar readings (though the
even-numbered ones are on the sunny side, so they tend to read slightly
higher). The tilt sensors, however, give different readings depending on
their location on the bridge. Tilt 1 and Tilt 6 show the biggest
displacements since they sit at the ends of the bridge, so Tilt 1 is
probably the one to start with if you want to consider only one tilt
sensor at first.





On Tue, Sep 30, 2014 at 8:40 PM, Matthew Taylor <[email protected]> wrote:

> On Tue, Sep 30, 2014 at 11:16 AM, John Blackburn <
> [email protected]> wrote:
>
>> Thanks very much, Matthew. You indeed have made good progress! Do you
>> think the model is currently attempting to infer future tilt from past
>> temperature?
>>
>> >> "and the model params it returned did not include any correlation
>> between Tilt_06 and any temperature inputs"
>> Do you mean the HTM swarming did not discover any correlation between
>> tilt and previous temperature? If so that is a problem...
>>
>
> Quoting Subutai:
>
> "Note that if temperature(t) is completely correlated with tilt(t), adding
> temperature(t) will not help, because the information is already available
> in tilt(t)."
>
> I think this may be the case.
>
>
>>
>> >> "but the model is not getting trained enough" This seems to be what we
>> observed as well. The HTM seemed to be constantly surprised by the new
>> data, and the anomaly score did not go down. If we switched off learning,
>> the predictions almost immediately became very poor, meaning (I think) the
>> model was constantly having to learn each new sequence as it came along
>> and was not generalising the data. It never seemed to gain a real
>> understanding, however long it was trained for.
>>
>
> I believe we just need to do some more experiments with swarming and
> different model parameters. If the params are wrong, more training won't
> help the model learn.
>
> ---------
> Matt Taylor
> OS Community Flag-Bearer
> Numenta
>
