I replied on Stack Overflow.
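For the archive, the core of the gradient part: for z = x + y, dz/dx = dz/dy = 1, so an Op's grad should simply pass the upstream gradient through to each input (which is what the `grad` method below already does). A minimal pure-Python sketch of that rule, independent of Theano; all names here are illustrative, not Theano API:

```python
# Reverse-mode rule for addition: the upstream gradient flows unchanged
# to both inputs, because d(x + y)/dx = d(x + y)/dy = 1.

def add_forward(x, y):
    """Forward pass: plain addition."""
    return x + y

def add_grad(output_grad):
    """Backward pass: one gradient per input, both equal to the
    upstream gradient (multiplied by the local derivative, 1)."""
    return [output_grad * 1.0, output_grad * 1.0]

z = add_forward(2.0, 3.0)   # 5.0
gx, gy = add_grad(1.0)      # seed the output gradient with 1.0
print(z, gx, gy)            # 5.0 1.0 1.0
```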

On Wed, Jan 25, 2017 at 3:25 PM, <[email protected]> wrote:

> Hi all,
>
> I followed the example of creating new operations and types in Python and
> I am trying to create a Type that allows differentiation (building upon the
> very basic "Double" type from the documentation).
>
> Links:
> - Creating a New Op: http://deeplearning.net/software/theano/extending/extending_theano.html
> - Making the Double Type: http://deeplearning.net/software/theano/extending/type.html
>
> From there, I currently have the following code:
>
> import theano
>
> class Double(theano.gof.Type):
>
>     def filter(self, value, strict=False, allow_downcast=None):
>         if strict:
>             # we need to return a value; raise an exception if it is incompatible
>             if isinstance(value, float):
>                 return value
>             else:
>                 raise TypeError('Expected a float!')
>         elif allow_downcast:
>             return float(value)
>         else:
>             value_float = float(value)
>             if value_float == value:
>                 return value_float
>             else:
>                 raise TypeError('The double type cannot accurately represent %s of type %s' % (value, type(value)))
>
>     def values_eq_approx(self, value_a, value_b, tolerance=1e-6):
>         return abs(value_a - value_b) / (abs(value_a) + abs(value_b)) < tolerance
>
> double = Double()
>
> class DoubleAddOp(theano.Op):
>
>     __props__ = ()
>
>     def make_node(self, x, y):
>         # check input types
>         if isinstance(x, (int, float)):
>             x = theano.gof.Constant(double, x)
>         if isinstance(y, (int, float)):
>             y = theano.gof.Constant(double, y)
>
>         if x.type != double or y.type != double:
>             raise TypeError('DoubleAddOp only works on doubles.')
>
>         return theano.gof.Apply(self, [x, y], [double()])
>
>     def perform(self, node, inputs, output_storage):
>         x = inputs[0]
>         y = inputs[1]
>         z = output_storage[0]
>         z[0] = x + y
>
>     def infer_shape(self, node, input_shapes):
>         return [input_shapes[0]]
>
>     def grad(self, inputs, output_grads):
>         return [output_grads[0]*1, output_grads[0]*1]
>
>     def __str__(self):
>         return 'DoubleAddOp'
>
> dadd = DoubleAddOp()
>
> Overall, I would like to be able to do something like:
>
> x = double('x')
> y = double('y')
> z = dadd(x, y)
> print(z.type)
> gx = theano.tensor.grad(z, x)
> gy = theano.tensor.grad(z, y)
> f = theano.function([x, y], [gx, gy])
>
> Are there any examples on defining new types allowing gradient computation
> or anyone willing to help?
>
> I also posted a related question on StackOverflow: http://stackoverflow.com/questions/41858327/how-to-define-custom-theano-types-allowing-differentiation
>
> Thanks!
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> For more options, visit https://groups.google.com/d/optout.
>
