You can use T.switch:

y_hat_clipped = T.switch(self.input2 > 1, 1, self.input2)
return T.mean((y - y_hat_clipped) ** 2)

Gradients will be computed correctly. In this case, wherever self.input2 is 
greater than 1, the gradient with respect to it will be zero.
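For illustration, here is a minimal NumPy sketch of the same clipping semantics (NumPy's np.where stands in for T.switch here; the symbolic Theano version produces the same values, and the example arrays are made up):

```python
import numpy as np

def mse_clipped(y, y_hat):
    # Mirror of T.switch(y_hat > 1, 1, y_hat): predictions above 1 are
    # clipped to 1 before the squared error is taken.
    y_hat_clipped = np.where(y_hat > 1, 1.0, y_hat)
    return np.mean((y - y_hat_clipped) ** 2)

y = np.array([1.0, 1.0, 0.5])      # targets
y_hat = np.array([1.3, 0.9, 0.5])  # network outputs

# The first output (1.3) is clipped to 1, so it contributes no error,
# and perturbing it slightly would not change the loss (zero gradient).
print(mse_clipped(y, y_hat))
```

The key point is that clipping via switch/where makes the loss locally constant in the clipped region, which is why the gradient there is exactly zero.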

On Thursday, April 27, 2017 at 12:46:22 PM UTC-7, Feras Almasri wrote:
>
>
> I'm building a convolutional neural network and I'm using mean squared error 
> as a cost function. I'm changing the cost function so that there is no error 
> when the network output is over one, so I'm thresholding the output to one 
> when it is bigger, using this code:
>
>  def MSE2(self, y):
>
>         loc = np.where(y == 1)[0]
>         for i in range(len(loc)):
>             if self.input2[loc[i]] > 1:
>                 self.input2[loc[i]] = 1
>
>         return T.mean((y - self.input2) ** 2)
>
> I'd like to know if the Theano gradient function will take this into account 
> when it calculates the gradient, or whether I should change something else.
>
> Besides this, is there any other way I can optimize this code to run faster, 
> or maybe on the GPU?
>
>
>
