Never mind - I realized that I should not even have a custom Op for 
something that is already a theano computation graph, because then the 
graph optimizations don't work anymore.
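In case somebody finds this later: instead of wrapping the scan in a custom Op, 
I now build the graph with a plain helper function, roughly along these lines (a 
minimal sketch - dydt and the other names are placeholders for my actual model):

    import theano
    import theano.tensor as tt

    def make_ode_graph(dydt, y0, theta, dt, n):
        # everything stays symbolic, so theano can optimize across the whole graph
        t_y0 = tt.as_tensor_variable(y0)
        t_theta = tt.as_tensor_variable(theta)
        t_dt = tt.as_tensor_variable(dt)
        t_n = tt.as_tensor_variable(n)
        Y_hat, _ = theano.scan(fn=dydt,
                               outputs_info=[{'initial': t_y0}],
                               sequences=[tt.arange(t_dt, t_dt * (t_n + 1), t_dt)],
                               non_sequences=[t_dt, t_theta],
                               n_steps=t_n - 1)
        Y_hat = tt.concatenate((t_y0[None, :], Y_hat))   # scan does not return y0
        return tt.transpose(Y_hat)                       # shape (len(y0), n)

The tensor returned by this function can be used directly in the pymc3 model, so 
no Apply node is needed at all.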
thanks for the help!

On Wednesday, August 2, 2017 at 16:31:57 UTC+2, Michael Osthege wrote:

> Your suggestion put me on the right track and I'm halfway there.
> I actually have two Ops - one whose perform() uses Python functions (that 
> one already works) and another that I can build purely from theano 
> operations:
>     def make_node(self, y0, theta, dt, n):
>         t_y0 = tt.as_tensor_variable(y0)
>         t_theta = tt.as_tensor_variable(theta)
>         t_dt = tt.as_tensor_variable(dt)
>         t_n = tt.as_tensor_variable(n)
>         Y_hat, updates = theano.scan(fn=self.dydt,
>                                      outputs_info=[{'initial': t_y0}],
>                                      sequences=[tt.arange(t_dt, t_dt * (t_n + 1), t_dt)],
>                                      non_sequences=[t_dt, t_theta],
>                                      n_steps=t_n - 1)
>
>         Y_hat = tt.concatenate((t_y0[None, :], Y_hat))   # scan does not return y0...
>         Y_hat = tt.transpose(Y_hat)                      # return as (len(y0), n)
>
>         apply_node = theano.Apply(self,
>                                   [t_y0, t_theta],       # symbolic inputs: y0 and theta
>                                   [Y_hat])               # symbolic outputs: Y_hat
>         return apply_node
>
> The computation graph leading to Y_hat builds fine, but Apply() raises the 
> exception "All output variables passed to Apply must belong to it." because 
> Y_hat.owner is already set - Apply expects output variables that were created 
> fresh for that node, not ones that another part of the graph already owns.
>
> From my understanding I should not use perform() here, because the inputs 
> and outputs of the actual computation should not have to pass through numpy.
> Where is the right place to put this sub-graph in the Op implementation?
>
> On Tuesday, August 1, 2017 at 19:25:34 UTC+2, nouiz wrote:
>
>> Just don't use itypes/otypes; implement the make_node() method instead. It 
>> isn't hard, and we won't be extending as_op or itypes/otypes any time soon.
>>
>> If you have questions about make_node(), you can ask them.
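>>
>> A minimal make_node() could look roughly like this (just a sketch - pick the 
>> output types that fit your Op; n_outputs stands for however you determine the 
>> number of outputs per instance):
>>
>>     def make_node(self, y0, theta):
>>         y0 = tt.as_tensor_variable(y0)
>>         theta = tt.as_tensor_variable(theta)
>>         # the outputs must be fresh variables that this Apply node will own,
>>         # and their number/types may depend on the instance
>>         outputs = [tt.dvector() for _ in range(self.n_outputs)]
>>         return theano.Apply(self, [y0, theta], outputs)
>>
>> The actual computation then goes into perform() (and grad()/infer_shape() if 
>> you need them).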
>>
>> Fred
>>
>> On Tue, Aug 1, 2017 at 12:33 PM 'Michael Osthege' via theano-users <
>> [email protected]> wrote:
>>
>>> sorry to dig up this old thread, but I am also working with pymc3 and 
>>> have a related problem:
>>>
>>> I am trying to create custom Ops for integrating an ODE model. I can 
>>> already do it with as_op, but that can't be pickled, which leads to 
>>> problems with parallelization in pymc3.
>>>
>>> I followed the theano documentation to implement a custom Op, but I 
>>> noticed a problem with the otypes. The *Op's otypes is a list of 
>>> dvector, but the length of that list can change with the Op parameters*. 
>>> However, itypes/otypes are not instance attributes but class attributes, so 
>>> theoretically I *can't have multiple custom Ops of the same type that 
>>> have different itypes/otypes*, right?
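>>>
>>> For illustration, the Op looks roughly like this (a simplified sketch, not 
>>> my actual code):
>>>
>>>     class IntegrateODE(theano.Op):
>>>         # class attributes -> shared by every instance of the Op
>>>         itypes = [tt.dvector, tt.dvector, tt.dscalar, tt.iscalar]
>>>         otypes = [tt.dvector] * 3    # but the number of outputs should 
>>>                                      # depend on the parameters of the instance
>>>
>>>         def perform(self, node, inputs, output_storage):
>>>             ...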
>>>
>>> Also, do you have any idea how I could circumvent this?
>>>
>>> cheers
>>>
>>

