Hi Christos,
You can download the original version from here
<http://www.cs.nyu.edu/%7Eylclab/data/norb-v1.0/> and follow the
instructions here to convert it to the pickle format:
<http://deeplearning.net/tutorial/gettingstarted.html>.
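In case it saves you some time, below is a rough, untested sketch of how
the conversion could look. The magic numbers and file names are the ones
documented in the small NORB README, so please double-check them against
your download (and gunzip the .mat.gz files first); the 90/10
train/validation split is just an arbitrary choice.

import pickle
import numpy as np

# Magic numbers of the small NORB binary matrix format (taken from the
# dataset's README -- please verify against your local copy).
MAGIC_TO_DTYPE = {
    0x1E3D4C51: np.float32,
    0x1E3D4C53: np.float64,
    0x1E3D4C54: np.int32,
    0x1E3D4C55: np.uint8,
    0x1E3D4C56: np.int16,
}

def read_norb_file(path):
    """Read one small NORB *-dat.mat or *-cat.mat file into a numpy array."""
    with open(path, 'rb') as f:
        magic = int(np.fromfile(f, dtype='<i4', count=1)[0])
        ndim = int(np.fromfile(f, dtype='<i4', count=1)[0])
        # the header always stores at least 3 dimension fields
        dims = np.fromfile(f, dtype='<i4', count=max(ndim, 3))[:ndim]
        data = np.fromfile(f, dtype=MAGIC_TO_DTYPE[magic])
    return data.reshape(dims)

def to_xy(dat_path, cat_path):
    """Flatten each stereo pair to one row in [0, 1], paired with its label."""
    x = read_norb_file(dat_path)   # (N, 2, 96, 96) uint8 images
    y = read_norb_file(cat_path)   # (N,) int32 category labels (0..4)
    x = x.reshape(x.shape[0], -1).astype('float32') / 255.
    return x, y.astype('int32')

train = to_xy('smallnorb-5x46789x9x18x6x2x96x96-training-dat.mat',
              'smallnorb-5x46789x9x18x6x2x96x96-training-cat.mat')
test = to_xy('smallnorb-5x01235x9x18x6x2x96x96-testing-dat.mat',
             'smallnorb-5x01235x9x18x6x2x96x96-testing-cat.mat')

# The tutorial's loader expects a (train, valid, test) tuple of (x, y)
# pairs; here the last 10% of the training set is split off as validation.
n_valid = train[0].shape[0] // 10
valid = (train[0][-n_valid:], train[1][-n_valid:])
train = (train[0][:-n_valid], train[1][:-n_valid])

with open('norb_small.pkl', 'wb') as f:
    pickle.dump((train, valid, test), f, protocol=pickle.HIGHEST_PROTOCOL)

The resulting norb_small.pkl should then load the same way the tutorial
loads mnist.pkl.gz, just without the gzip step.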
Best regards,
Vincenzo
On 12/07/2016 09:38, Geppetto Null wrote:
Hi Vincenzo, could you please help me find the NORB dataset in
Theano/Lasagne format?
Thank you very much,
Best,
Christos
On Thursday, 11 June 2015 00:38:04 UTC+3, Vincenzo Lomonaco wrote:
Hello everyone,
I am trying to reproduce with Theano the results reported on the
small NORB dataset in the paper "Learning Methods for Generic Object
Recognition with Invariance to Pose and Lighting"
[ Huang, LeCun - http://yann.lecun.com/exdb/publis/pdf/lecun-04.pdf ]
using CNNs.
Starting from the LeNet tutorial
[ http://deeplearning.net/tutorial/lenet.html ] I changed the model
to fit what is described in the paper, but I can't get the error
rate below *8.7%*, while the paper reports *6.8%*.
Has anyone tried this before?
Here are the model details:
nkerns = [8, 24]
in_dim, img_dim = 2, 96   # stereo pair of 96x96 images

layer0_input = x.reshape((batch_size, in_dim, img_dim, img_dim))

# 96x96 -> conv 5x5 -> 92x92 -> max-pool 4x4 -> 23x23
layer0 = LeNetConvPoolLayer(
    rng,
    input=layer0_input,
    image_shape=(batch_size, in_dim, img_dim, img_dim),
    filter_shape=(nkerns[0], in_dim, 5, 5),
    poolsize=(4, 4),
    pool_type='max'
)

# 23x23 -> conv 6x6 -> 18x18 -> max-pool 3x3 -> 6x6
layer1 = LeNetConvPoolLayer(
    rng,
    input=layer0.output,
    image_shape=(batch_size, nkerns[0], 23, 23),
    filter_shape=(nkerns[1], nkerns[0], 6, 6),
    poolsize=(3, 3),
    pool_type='max'
)

layer2_input = layer1.output.flatten(2)
layer2 = HiddenLayer(
    rng,
    input=layer2_input,
    n_in=nkerns[1] * 6 * 6,
    n_out=batch_size,
    activation=T.tanh
)

layer3 = LogisticRegression(input=layer2.output, n_in=batch_size, n_out=5)
I've also tried sum and average pooling in addition to max, and
implemented dropout for the hidden layer, but without any great
improvement.
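For reference, the dropout I added is roughly along these lines (an
untested sketch, using "inverted" dropout on the hidden layer's output;
the drop probability and seed are arbitrary):

import theano
from theano.sandbox.rng_mrg import MRG_RandomStreams

srng = MRG_RandomStreams(seed=1234)
p_drop = 0.5

# binary mask over the hidden activations; scaling at training time
# ("inverted" dropout) means no rescaling is needed at test time
drop_mask = srng.binomial(n=1, p=1. - p_drop, size=layer2.output.shape,
                          dtype=theano.config.floatX)
layer2_dropped = layer2.output * drop_mask / (1. - p_drop)

# the training graph uses the dropped output; at test time I feed
# layer2.output directly into the logistic regression layer
layer3 = LogisticRegression(input=layer2_dropped, n_in=batch_size, n_out=5)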
Do you think the problem could be the fully-connected convolution
operation? Does anyone have example code for selecting input feature
maps in a convolutional layer, i.e. restricting which input maps each
output map is connected to? A rough sketch of what I mean follows below.
Any suggestions?
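To make that last question concrete, here is a rough, untested sketch of
the kind of thing I have in mind, assuming the newer
theano.tensor.nnet.conv2d interface: a fixed binary connection table
broadcast over the filter tensor, so the masked-out connections (and
their gradients) stay at zero. The table below is just a made-up
example; in practice the mask would live inside a modified
LeNetConvPoolLayer.

import numpy as np
import theano
from theano.tensor.nnet import conv2d

# hypothetical connection table: conn[j, i] = 1 if output map j is
# allowed to see input map i (here, arbitrarily, two inputs per output)
conn = np.zeros((nkerns[1], nkerns[0]), dtype=theano.config.floatX)
for j in range(nkerns[1]):
    conn[j, j % nkerns[0]] = 1.
    conn[j, (j + 1) % nkerns[0]] = 1.
conn_shared = theano.shared(conn, name='conn')
conn_mask = conn_shared.dimshuffle(0, 1, 'x', 'x')  # broadcast over the 6x6 filters

# W is the filter bank of the second conv layer (same init style as the
# tutorial); multiplying by the mask zeroes the masked weights and their gradients
W_shape = (nkerns[1], nkerns[0], 6, 6)
W = theano.shared(np.asarray(rng.uniform(low=-0.1, high=0.1, size=W_shape),
                             dtype=theano.config.floatX), name='W')
conv_out = conv2d(input=layer0.output,
                  filters=W * conn_mask,
                  filter_shape=W_shape,
                  input_shape=(batch_size, nkerns[0], 23, 23))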
--
Vincenzo Lomonaco, M.Sc.
PhD student @ University of Bologna
http://www.vincenzolomonaco.com/
Linkedin <http://it.linkedin.com/in/vincenzolomonaco/> Twitter
<https://twitter.com/v_lomonaco> Facebook
<https://www.facebook.com/vincenzo.lomonaco.91> Google+
<https://plus.google.com/u/0/+VincenzoLomonaco>