I would like to add a constraint on the filters of my convolution operator, 
so that they are always symmetric and positive semi-definite along the two 
trailing (spatial) axes.

There are many ways to achieve this, for instance using Gaussian kernels, 
or building the symmetric Gram matrices of the filters.
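
For concreteness, here is a minimal sketch of the Gram-matrix option in 
Theano; the names and sizes are illustrative, assuming square k x k filters 
stored as a 4D tensor of shape (n_out, n_in, k, k):

    import numpy as np
    import theano
    import theano.tensor as T

    n_out, n_in, k = 8, 3, 5  # illustrative sizes
    W = theano.shared(
        np.random.randn(n_out, n_in, k, k).astype(theano.config.floatX),
        name='W')

    # Flatten the leading axes so each k x k filter is one matrix,
    # then form A A^T for each: the result is symmetric and PSD.
    W_flat = W.reshape((n_out * n_in, k, k))
    W_psd = T.batched_dot(W_flat, W_flat.dimshuffle(0, 2, 1))
    W_psd = W_psd.reshape((n_out, n_in, k, k))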

To ensure that this condition is always met throughout training, I'm 
applying this transformation to the filters before every convolution.
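
Concretely, something like the following is what I mean: the convolution 
sees the transformed filters while the update step modifies the raw W, and 
since the transformation is part of the symbolic graph, T.grad 
differentiates through it (the loss below is just a placeholder):

    from theano.tensor.nnet import conv2d

    x = T.tensor4('x')            # input, shape (batch, n_in, rows, cols)
    y = conv2d(x, W_psd)          # convolve with the constrained filters

    loss = T.mean(y ** 2)         # placeholder loss, for illustration only
    grad_W = T.grad(loss, W)      # gradient w.r.t. the raw parameter
    train = theano.function([x], loss,
                            updates=[(W, W - 0.01 * grad_W)])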

The filters would then be modified between an update and a forward step! I 
am not sure it's a good idea...



*My questions:* Will the network be able to learn if the filters are passed 
through this transformation at each forward step?

Is there another way to constrain the filters of a layer to satisfy some 
condition?
