I am having a problem with the evaluation of a 0-1 classification network 
using a fixed batch size.
The setup is as follows: at training time I use a fixed batch size of 256 
images, while at test time I have only a single image (so I need a 
batch size of 1).
My solution is to create a test network that shares the parameters of the 
training network but has a fixed batch size of 1:

*lasagne.layers.InputLayer(shape=(1, 1, input_height, input_width, 
input_depth))*

Now, at evaluation time, the test network does not produce exactly the 
same results as the training network fed with my single test image at 
index 0 and all remaining batch slots (indices 1 to 255) set to 0. The 
difference is around 1e-5.
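To make the setup concrete, here is a minimal numpy model of it (the shapes and names are illustrative, not the actual Lasagne layers): one shared weight matrix is applied once through a batch-256 path with the image at index 0 and zero padding, and once through a batch-1 path. The two code paths may be handled by different BLAS kernels, which can round intermediate results differently.

```python
import numpy as np

np.random.seed(0)
H = W = 8  # hypothetical input size, for illustration only

# One shared weight matrix, standing in for the parameters
# shared between the training and test networks.
Wm = np.random.randn(H * W, 2).astype(np.float32)

img = np.random.randn(1, H * W).astype(np.float32)

# "Training network" path: fixed batch of 256,
# test image at index 0, remaining slots zeroed.
batch = np.zeros((256, H * W), dtype=np.float32)
batch[0] = img[0]
out_train = batch @ Wm          # shape (256, 2)

# "Test network" path: batch size 1, same weights.
out_test = img @ Wm             # shape (1, 2)

# Maximum discrepancy between the two paths; often 0 here, but it
# can be on the order of float32 rounding depending on the kernel.
diff = np.abs(out_train[0] - out_test[0]).max()
print(diff)
```

In this toy model the two results agree to within float32 tolerance, which is the same order of magnitude as the 1e-5 discrepancy described above.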

I know I could use None instead of a fixed batch size, but in my 
experiments a fixed batch size gives significantly better runtime 
performance.

Has anyone had experience with this? Is it just a numerical-precision 
issue?
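For what it's worth, a discrepancy of ~1e-5 is consistent with float32 rounding: simply changing the order in which a reduction accumulates its terms can shift the result by that much. A small numpy sketch (assumptions: numpy's default `sum` on a contiguous float array uses pairwise reduction, which I compare against a naive left-to-right float32 accumulation):

```python
import numpy as np

np.random.seed(0)
a = np.random.rand(100_000).astype(np.float32)

# Pairwise reduction (numpy's default summation strategy).
pairwise = a.sum()

# Naive left-to-right accumulation in float32.
seq = np.float32(0.0)
for x in a:
    seq += x

# Float64 reference for comparison.
ref = float(a.astype(np.float64).sum())

# Relative difference caused purely by summation order.
rel = abs(float(pairwise) - float(seq)) / ref
print(rel)
```

Both sums are "correct" to float32 precision, yet they differ slightly; different batch shapes can likewise steer the framework into differently ordered (or differently blocked) kernels, producing exactly this kind of small mismatch.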

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/d/optout.
