You probably don't have the same problem. Start a new thread on the mailing
list with a good description of the problem and your error.

Otherwise I'm pretty sure nobody will help you.

Fred

On Jan 20, 2017, 03:44, "Dewesh Pardhi" <dewesh...@gmail.com> wrote:

> Hi Pranav, I am also facing the same issue: I am not able to compute the
> grad of one of the layers. Can you please help me out? You can reach me at
> dewesh...@gmail.com, mobile +91 9096014001.
>
> On Sunday, August 28, 2016 at 4:54:15 PM UTC+5:30, pranav inani wrote:
>>
>>
>>
>> On Saturday, 27 August 2016 22:46:23 UTC+5:30, martin.de...@gmail.com
>> wrote:
>>>
>>> From what you say, you have 4 parameters, if I understand correctly:
>>> [W, b, W, b]?
>>>
>>
>> W,b and W,b are the weights and biases of the hidden layer and the logistic
>> classifier respectively (the last two layers of a conv neural network). The
>> next six values (<TensorType(float64, 4D)>, <TensorType(float64, vector)>,
>> <TensorType(float64, 4D)>, <TensorType(float64, vector)>,
>> <TensorType(float64, 4D)>, <TensorType(float64, vector)>) are supposed to
>> be the weights and biases of the 3 ConvPool layers preceding the hidden
>> layer.
>>
>>
>>
>>> Are they referring to the same thing? If not, consider renaming them
>>> W1,b1,W2,b2.
>>>
>>> From your code you define your parameters to be float32, but from what
>>> you say, your params are float64 when you print them.
>>> Why is that?
>>>
>>> Explicitly define every parameter that you use to be float32, and also
>>> make sure your data are float32 for consistency.
>>>
>>
>> I have coded everything to use floatX, which defaults to float64. The
>> version of the code without the loop works fine with float64. I haven't
>> changed the dtype of the parameters anywhere, and it works in the other
>> code. So I have concluded that the dtype is not the problem.
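For reference, a minimal sketch of what keeping everything at one dtype looks like. This is plain numpy with illustrative names; the `floatX` constant and `as_floatX` helper merely mirror Theano's `theano.config.floatX` convention and are not code from this thread:

```python
import numpy as np

# Illustrative stand-in for theano.config.floatX (an assumption,
# not taken from the original code).
floatX = "float64"

def as_floatX(arr):
    # Cast any array-like to the configured float dtype.
    return np.asarray(arr, dtype=floatX)

rng = np.random.RandomState(0)
W = as_floatX(rng.randn(4, 3))   # hidden-layer weights
b = as_floatX(np.zeros(3))       # hidden-layer biases
x = as_floatX(rng.randn(2, 4))   # a mini-batch of inputs

out = x.dot(W) + b
print(out.dtype)  # stays float64 throughout, so no implicit upcasting
```

If every parameter and every input goes through one helper like this, a stray float32/float64 mix cannot silently appear in the graph.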
>>
>> https://groups.google.com/forum/#!topic/theano-users/FQz7qn3C1qc
>>
>>
>>
>> The thread above describes a similar problem; I have tried that solution
>> as well.
>>
>> I tried feeding the params separately for each layer. Interestingly, the
>> grad function was able to calculate the gradients for the hidden layer, the
>> logistic classifier and the 3rd ConvPool layer, but gave a disconnected
>> input error for the first 2 ConvPool layers.
>>
>> All this leads to one thing: the fault is in the loop.
>>
>> Any other ideas would be appreciated.
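One hypothetical way a layer-building loop produces exactly this symptom (gradients only for the last ConvPool layer, disconnected inputs for the earlier ones) is collecting the parameter list by rebinding instead of accumulating. This is a plain-Python sketch with made-up names, not the original code:

```python
# Hypothetical sketch: three ConvPool layers built in a loop.
class ConvPoolLayer:
    def __init__(self, name):
        # Stand-ins for the layer's shared variables W and b.
        self.params = [name + "_W", name + "_b"]

params = []
for i in range(3):
    layer = ConvPoolLayer("conv%d" % i)
    # Bug to watch for: writing `params = layer.params` here rebinds
    # the list each iteration, so only the last layer's W and b
    # survive -- the earlier layers would then look disconnected
    # to the gradient computation.
    params += layer.params  # correct: accumulate every layer's params

print(params)
# all six parameters: conv0_W, conv0_b, conv1_W, conv1_b, conv2_W, conv2_b
```

Worth checking whether the loop in the failing code extends one shared list, or overwrites it (or a layer-input variable) each iteration.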
>>
>> Regards
>> Pranav
>>
>>
>>
>>
>>>
>>> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>
