[theano-users] Error when trying to do w^T*x+b

2017-07-07 Thread zxzhijia
conv_out is the output of dnn.dnn_conv. I tried to add the bias b to w^T*x,
but it reports an error:



Running network...
Traceback (most recent call last):
  File "", line 1, in <module>
    runfile('/space/xzhang/git_cnn_conversion/MyLasagneCode_CIFAR10/test_convnet_binary_bias.py', wdir='/space/xzhang/git_cnn_conversion/MyLasagneCode_CIFAR10')
  File "/space/xzhang/anaconda2/lib/python2.7/site-packages/spyder/utils/site/sitecustomize.py", line 866, in runfile
    execfile(filename, namespace)
  File "/space/xzhang/anaconda2/lib/python2.7/site-packages/spyder/utils/site/sitecustomize.py", line 94, in execfile
    builtins.execfile(filename, *where)
  File "/space/xzhang/git_cnn_conversion/MyLasagneCode_CIFAR10/test_convnet_binary_bias.py", line 161, in <module>
    main(**kargs)
  File "/space/xzhang/git_cnn_conversion/MyLasagneCode_CIFAR10/test_convnet_binary_bias.py", line 107, in main
    dt=dt, max_rate=1000, proc_fn=get_output, reset_fn=final_dense)
  File "spike_tester_theano.py", line 128, in run_tester
    out_mem, t, Ntransmittedspikes, conv1_spikes, conv2_spikes, conv3_spikes = proc_fn(inp_images.astype('float32'), float(t))
  File "/space/xzhang/anaconda2/lib/python2.7/site-packages/theano/compile/function_module.py", line 898, in __call__
    storage_map=getattr(self.fn, 'storage_map', None))
  File "/space/xzhang/anaconda2/lib/python2.7/site-packages/theano/gof/link.py", line 325, in raise_with_op
    reraise(exc_type, exc_value, exc_trace)
  File "/space/xzhang/anaconda2/lib/python2.7/site-packages/theano/compile/function_module.py", line 884, in __call__
    self.fn() if output_subset is None else\

ValueError: GpuElemwise. Input dimension mis-match. Input 1 (indices start at 0) has shape[3] == 32, but the output's size on that axis is 16.
Apply node that caused the error: GpuElemwise{Add}[(0, 0)](GpuSubtensor{::, ::, int64:int64:, int64:int64:}.0, InplaceGpuDimShuffle{x,x,x,0}.0)
Toposort index: 250
Inputs types: [GpuArrayType(float32, 4D), GpuArrayType(float32, (True, True, True, False))]
Inputs shapes: [(1, 32, 16, 16), (1, 1, 1, 32)]
Inputs strides: [(51200, 1600, 80, 4), (128, 128, 128, 4)]
Inputs values: ['not shown', 'not shown']
Outputs clients: [[HostFromGpu(gpuarray)(GpuElemwise{Add}[(0, 0)].0)]]

HINT: Re-running with most Theano optimization disabled could give you a back-trace of when this node was created. This can be done by setting the Theano flag 'optimizer=fast_compile'. If that does not work, Theano optimizations can be disabled with 'optimizer=None'.
HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and storage map footprint of this apply node.
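
The input shapes in the error, (1, 32, 16, 16) versus (1, 1, 1, 32), show the bias being broadcast along the last axis, while dnn_conv output is laid out as (batch, channels, rows, cols), so the bias belongs on axis 1. A minimal sketch of the usual fix (variable names here are illustrative, not from the original code, and one bias per output channel is assumed):

import numpy as np
import theano
import theano.tensor as T

conv_out = T.tensor4('conv_out')  # dnn_conv output: (batch, channels, rows, cols)
b = theano.shared(np.zeros(32, dtype=theano.config.floatX), name='b')  # one bias per channel

# Put the bias on axis 1 (channels), not axis 3; 'x' inserts a broadcastable
# axis, so the pattern ('x', 0, 'x', 'x') reshapes b to (1, 32, 1, 1).
out = conv_out + b.dimshuffle('x', 0, 'x', 'x')

f = theano.function([conv_out], out)
print(f(np.zeros((1, 32, 16, 16), dtype=theano.config.floatX)).shape)  # (1, 32, 16, 16)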



Re: [theano-users] Re: How to create an activation function from scratch in Python

2017-07-07 Thread Jesse Livezey
Here you're treating val as if it were a symbolic theano variable:

T.log(val)

But here you're treating it like a numpy array and passing it into a compiled
theano function:

return f_switch(val, 0, val, val)

Maybe you're intending to just return the function f_switch and then call 
it with values?
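
A minimal sketch of that pattern, compiling the graph once and then calling it with numeric arrays (qEff was undefined in the original snippet, so it is taken here as a plain Python constant; the other names follow the original post):

import numpy as np
import theano
import theano.tensor as T

qEff = 2.0  # assumed constant; undefined in the original snippet

a, x, y = T.matrices('a', 'x', 'y')
b = T.scalar('b')

# Element-wise: (x**qEff + 0) / 2 where a > b, log(y) elsewhere.
z_switch = T.switch(T.gt(a, b), T.true_div(T.add(T.pow(x, qEff), 0), 2), T.log(y))
f_switch = theano.function([a, b, x, y], z_switch,
                           mode=theano.Mode(linker='vm'))

# Call the compiled function with numpy arrays, not symbolic Variables.
val = np.ones((3, 3), dtype=theano.config.floatX)
print(f_switch(val, 0, val, val))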


On Wednesday, July 5, 2017 at 3:02:58 PM UTC-7, nouiz wrote:
>
> Give the full error message. Without it I can't help.
>
> Fred
>
> On Wed, Jul 5, 2017 at 12:33, Bruno Messias wrote:
>
>> I need to call the "custom" function with a given variable x, such that
>>
>> type(x)
>>
>>
>> On Wednesday, July 5, 2017 at 12:53:22 PM UTC-3, Bruno Messias wrote:
>>>
>>> For didactic reasons, I am trying to implement an "activation" function:
>>>
>>>
>>> a, x, y = T.matrices("a", 'x','y')
>>> b = T.scalars("b")
>>> def  custom(val):
>>>
>>> T.log(val)
>>> 
>>> 
>>> z_switch = T.switch(T.gt(a, b), T.true_div(T.add(T.pow(x, qEff), 0), 2), T.log(y))
>>>
>>> f_switch = theano.function([a, b, x, y], z_switch,
>>>                            mode=theano.Mode(linker='vm'))
>>> return f_switch(val, 0, val, val)
>>>
>>> Then I get the following error
>>>
>>> Expected an array-like object, but found a Variable: maybe you are trying 
>>> to call a function on a (possibly shared) variable instead of a numeric 
>>> array?
>>>
>>> To repeat, this is only for didactic purposes. Are there any good
>>> tutorials about this?



Re: [theano-users] Re: How to delete a theano model from the GPU before initiating another model

2017-07-07 Thread Feras Almasri
I checked the theano documentation and it says this:

"You can give None for any element of the list to specify that this element
is not known at compile time."
http://deeplearning.net/software/theano/library/tensor/nnet/conv.html

As I said, I don't want to use any higher-level library on top of theano; I'm
using pure theano. I think in my case it works well because I'm using a
convnet that keeps the same image size throughout the network: since there is
no downsampling, there is no need to recompute the shapes for each different
batch size. But I don't really know why theano needs to know the image size
before creating the model, while with keras or lasagne it just works.

Anyway, my problem is solved here and it is not necessary to recompile the
model, but I guess in other cases this could be important, especially if the
network is used in a live run. Another very important proposal is to add
separable convolution support to the theano framework.
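
For reference, a minimal sketch of compiling once with the batch dimension left as None and then calling the same function with different batch sizes (the shapes and names here are illustrative, not from the original code):

import numpy as np
import theano
import theano.tensor as T
from theano.tensor.nnet import conv2d

x = T.tensor4('x')  # the batch dimension stays symbolic
w = theano.shared(np.ones((8, 3, 3, 3), dtype=theano.config.floatX), name='w')

# input_shape=(None, 3, 32, 32): None marks the batch size as unknown at
# compile time, as in the documentation quoted above.
y = conv2d(x, w, input_shape=(None, 3, 32, 32), filter_shape=(8, 3, 3, 3))
f = theano.function([x], y)

# The same compiled function accepts different batch sizes.
print(f(np.zeros((4, 3, 32, 32), dtype=theano.config.floatX)).shape)   # (4, 8, 30, 30)
print(f(np.zeros((16, 3, 32, 32), dtype=theano.config.floatX)).shape)  # (16, 8, 30, 30)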

On Thursday, July 6, 2017 at 12:03:01 AM UTC+2, nouiz wrote:
>
> Pure Theano does not expect shapes. By default, shapes can change. You just
> need to be consistent in the computations you do on the shapes.
>
> If you set the batchsize shape to None, you are not using pure Theano.
>
> Do you use lasagne? Keras?
>
> Can you show the code where you set the shape to None?
>
> Fred
>
> On Wed, Jul 5, 2017 at 08:01, Feras Almasri wrote:
>
>> I found that it is possible to change the batch size at run time by
>> setting the batch size to None. But the pooling layer, in the case of
>> averaging or same size, doesn't have this option and has to be defined in a
>> different way.
>>
>>
>> On Tuesday, July 4, 2017 at 11:21:04 PM UTC+2, Feras Almasri wrote:
>>>
>>> I'm re-initiating another model in a loop because I'm testing different
>>> batch sizes, so I have to initiate the model again each time. It seems in
>>> my code that every time I re-initiate the model, the old model is still on
>>> the GPU and not deleted. Is there any way to delete the model before
>>> initiating the second one?
