conv_out is the output of dnn.dnn_conv. I tried to add the bias to w^T*x, but it reports an error:



Running network...
Traceback (most recent call last):
  File "<ipython-input-8-b830fbb18105>", line 1, in <module>
    runfile('/space/xzhang/git_cnn_conversion/MyLasagneCode_CIFAR10/test_convnet_binary_bias.py', wdir='/space/xzhang/git_cnn_conversion/MyLasagneCode_CIFAR10')
  File "/space/xzhang/anaconda2/lib/python2.7/site-packages/spyder/utils/site/sitecustomize.py", line 866, in runfile
    execfile(filename, namespace)
  File "/space/xzhang/anaconda2/lib/python2.7/site-packages/spyder/utils/site/sitecustomize.py", line 94, in execfile
    builtins.execfile(filename, *where)
  File "/space/xzhang/git_cnn_conversion/MyLasagneCode_CIFAR10/test_convnet_binary_bias.py", line 161, in <module>
    main(**kargs)
  File "/space/xzhang/git_cnn_conversion/MyLasagneCode_CIFAR10/test_convnet_binary_bias.py", line 107, in main
    dt=dt, max_rate=1000, proc_fn=get_output, reset_fn=final_dense)
  File "spike_tester_theano.py", line 128, in run_tester
    out_mem, t, Ntransmittedspikes, conv1_spikes, conv2_spikes, conv3_spikes = proc_fn(inp_images.astype('float32'), float(t))
  File "/space/xzhang/anaconda2/lib/python2.7/site-packages/theano/compile/function_module.py", line 898, in __call__
    storage_map=getattr(self.fn, 'storage_map', None))
  File "/space/xzhang/anaconda2/lib/python2.7/site-packages/theano/gof/link.py", line 325, in raise_with_op
    reraise(exc_type, exc_value, exc_trace)
  File "/space/xzhang/anaconda2/lib/python2.7/site-packages/theano/compile/function_module.py", line 884, in __call__
    self.fn() if output_subset is None else\

ValueError: GpuElemwise. Input dimension mis-match. Input 1 (indices start at 0) has shape[3] == 32, but the output's size on that axis is 16.
Apply node that caused the error: GpuElemwise{Add}[(0, 0)]<gpuarray>(GpuSubtensor{::, ::, int64:int64:, int64:int64:}.0, InplaceGpuDimShuffle{x,x,x,0}.0)
Toposort index: 250
Inputs types: [GpuArrayType<None>(float32, 4D), GpuArrayType<None>(float32, (True, True, True, False))]
Inputs shapes: [(1, 32, 16, 16), (1, 1, 1, 32)]
Inputs strides: [(51200, 1600, 80, 4), (128, 128, 128, 4)]
Inputs values: ['not shown', 'not shown']
Outputs clients: [[HostFromGpu(gpuarray)(GpuElemwise{Add}[(0, 0)]<gpuarray>.0)]]

HINT: Re-running with most Theano optimization disabled could give you a back-trace of when this node was created. This can be done with by setting the Theano flag 'optimizer=fast_compile'. If that does not work, Theano optimizations can be disabled with 'optimizer=None'.
HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and storage map footprint of this apply node.
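For reference, the shapes in the Apply node can be reproduced in plain NumPy (standing in for the Theano graph, and assuming the usual bc01 layout of dnn_conv output, i.e. batch x channels x rows x cols). The InplaceGpuDimShuffle{x,x,x,0} in the error means the bias vector was broadcast along the last (width) axis, shape (1, 1, 1, 32), whereas for a bc01 conv output it would need to sit on the channel axis, shape (1, 32, 1, 1):

```python
import numpy as np

# Shapes taken from the error message:
conv_out = np.zeros((1, 32, 16, 16), dtype=np.float32)  # (batch, channels, rows, cols)
b = np.zeros(32, dtype=np.float32)                      # one bias per channel

# Broadcasting the bias on the last axis, as dimshuffle('x','x','x',0) does,
# reproduces the mismatch: axis 3 is 16 in the output but 32 in the bias.
try:
    conv_out + b.reshape(1, 1, 1, 32)
except ValueError as e:
    print("mismatch:", e)

# Putting the bias on the channel axis instead, the Theano equivalent of
# b.dimshuffle('x', 0, 'x', 'x'), broadcasts cleanly:
out = conv_out + b.reshape(1, 32, 1, 1)
print(out.shape)  # (1, 32, 16, 16)
```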
