> A more natural way seems to me: predict what state the PRS is in the
> next time a prediction failure (of the base level) happens. The PRS can
> be seen as containing all the information about the future (including
> future PRSes) until a prediction failure. Apparently the environment
> then switches to some new unforeseen behaviour. Until then the
> environment did not contain any new information; it can be compressed
> to the PRS (and the Abstraction and Prediction neural networks, which
> are fixed during run time).

A correction:
The Prediction neural network computes the predicted next input on the basis
of the most recent input as well (Pr: C, I -> Ipred). Therefore, even if the
prediction succeeds, the PRS may not contain all the information needed to
construct the future input.
The sequence until a prediction failure is compressible to the PRS plus the
present input. Because of this the PRS can be viewed as, and operate as, an
abstract classification without the detail information (which is stored in
the present input I), although in very simple environments the PRS can also
hold the detail information. So it is not an unimportant point.

Second:
The PRS that is predicted is the PRS (= context C) after the prediction
failure.
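Continuing the sketch above (again with assumed interfaces): the quantity a
higher level should predict is then the context that holds after the
Abstraction network has absorbed the unforeseen input.

def post_failure_context(context, failing_input, abstraction):
    # C plus the unforeseen input -> the new context C' after the
    # prediction failure; this C' is the prediction target for the
    # higher level (assumed interface for the Abstraction network).
    return abstraction(context, failing_input)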

By the way:
I'm going to use BPTT (backpropagation through time) to train the recurrent
neural network. According to Schmidhuber at IDSIA there is a better recurrent
neural network system, Long Short-Term Memory (LSTM). This network plus
training technique is claimed to be able to look much further back (>1000
steps instead of about 10 for BPTT). Is anyone familiar with this? Can it be
put into the same layering architecture that I am aiming at, i.e. does it
have an abstraction state?
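For comparison, a minimal sketch of one BPTT update on the next-input
prediction task, written with PyTorch purely as an illustration (the library
choice and all names here are assumptions, not part of the project): the
LSTM's hidden/cell state would have to play the role of the abstraction
state asked about above.

import torch
import torch.nn as nn

class Predictor(nn.Module):
    def __init__(self, input_size=8, hidden_size=32):
        super().__init__()
        # LSTM replaces the plain recurrent network; its (h, c) state is
        # the candidate "abstraction state" in question.
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, input_size)   # Ipred from hidden state

    def forward(self, x, state=None):
        h, state = self.lstm(x, state)
        return self.out(h), state

model = Predictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
seq = torch.randn(1, 50, 8)              # dummy sequence: batch x time x features

# One BPTT update: predict input[t+1] from inputs up to t.
pred, _ = model(seq[:, :-1])
loss = nn.functional.mse_loss(pred, seq[:, 1:])
opt.zero_grad()
loss.backward()                          # gradients flow back through time
opt.step()

Whether that (h, c) state can be read out and predicted by a higher layer,
the way the PRS/context is above, is exactly the open question.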

Bye,
Arnoud
