> Should our code do this differently?
No, that warning can be ignored. We have some things to update internally
that will make it go away.
See https://github.com/pymc-devs/aesara/issues/338.
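In the meantime, if the warning is noisy, one way to mute just that message is
Python's standard warnings filter. This is only a sketch; adjust the pattern to
match the full message you see, and drop it once the linked fix lands:

import warnings

warnings.filterwarnings(
    "ignore",
    message=r".*cannot be replaced; it isn't in the FunctionGraph",
    category=UserWarning,
)
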
On Thursday, May 6, 2021 at 8:18:25 PM UTC-5 [email protected] wrote:
> I'm finding the switch from Theano to Aesara straightforward: our
> simulation code produces the same output, and one program runs 19% faster
> once there's a ~/.aesara cache!
>
> This is fantastic. I was getting worried about depending on Theano.
>
>
> It did print two kinds of warnings:
>
> RuntimeWarning: divide by zero encountered in true_divide
>
> .../lib/python3.8/site-packages/aesara/graph/fg.py:525: UserWarning:
> Variable Elemwise{mul}.0 cannot be replaced; it isn't in the FunctionGraph
>
>
> The first warning is from our code calling NumPy. It turns out that `import
> theano` suppresses that warning globally!
>
> It's great that Aesara doesn't suppress that warning, but it would've
> saved some debugging time if the release notes had mentioned the change. :)
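>
> A minimal sketch of scoping that warning in plain NumPy, instead of relying
> on a global override (the array values here are made up):
>
> import numpy as np
>
> x = np.array([1.0, 0.0, 2.0])
> with np.errstate(divide="ignore"):   # suppression applies only inside this block
>     y = 1.0 / x                      # yields inf where x == 0, without the warning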
>
>
> The second kind of warning comes from deep in Aesara, in code that used to
> "just return silently because it makes it easier to implement some
> optimizations for multiple-output ops".
>
> Here's the stack trace (OCR'd from a screenshot of the PyCharm debugger's
> "frames" pane):
>
> replace, fg.py:525
> replace_all_validate, toolbox.py:574
> replace_all_validate_remove, toolbox.py:642
> process_node, opt.py:2058
> apply, opt.py:2535
> optimize, opt.py:84
> apply, opt.py:246
> optimize, opt.py:84
> __call__, opt.py:93
> __init__, types.py:1616
> orig_function, types.py:1971
> pfunc, pfunc.py:524
> function, __init__.py:337
> make_thunk, op.py:970
> make_all, vm.py:1133
> make_thunk, basic.py:293
> create, types.py:1837
> orig_function, types.py:1982
> pfunc, pfunc.py:524
> function, __init__.py:337
> km_loss_function, rna_decay.py:179
> ...
>
> km_loss_function is in our code:
>
> import aesara.tensor as T
> from aesara import function, gradient
>
> km = T.dvector()
> ...
>
> J = gradient.jacobian(LossFunction, km)
> J_aux = gradient.jacobian(LossFunction_aux, km)
> Jacob = function([km], J)
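>
> For context, a minimal self-contained sketch of the same pattern (the loss
> expression below is just a made-up placeholder for the LossFunction built above):
>
> import numpy as np
> import aesara.tensor as T
> from aesara import function, gradient
>
> km = T.dvector("km")
> loss = T.log(km) ** 2              # hypothetical vector-valued expression in km
> J = gradient.jacobian(loss, km)    # Jacobian matrix d loss_i / d km_j
> jacob = function([km], J)
> print(jacob(np.array([1.0, 2.0, 3.0])))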
>
> Should our code do this differently?
> Do you need a small test case?
>