Having runtime-loadable / pluggable operators might help with this.
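For illustration, here is a minimal sketch of what a runtime-loadable operator mechanism could look like, using a plain dlopen-based plugin in C++. The interface (an exported RegisterOperators entry point and an OpEntry descriptor) is hypothetical and not an existing MXNet API; the point is only that a cuDNN-backed library and a plain-CUDA-backed library could export the same interface and be chosen at load time instead of at compile time.

```cpp
// Hypothetical plugin loader sketch -- not MXNet's actual plugin API.
#include <dlfcn.h>
#include <cstdio>

// Descriptor a plugin would export for each operator it provides.
struct OpEntry {
  const char* name;                                     // operator name, e.g. "conv2d"
  void (*forward)(const float* in, float* out, int n);  // simplified kernel signature
};

// Entry point every plugin shared library would be expected to export:
// fills `entries` and returns the number of operators.
using RegisterOperatorsFn = int (*)(const OpEntry** entries);

int main(int argc, char** argv) {
  if (argc < 2) {
    std::fprintf(stderr, "usage: %s <plugin.so>\n", argv[0]);
    return 1;
  }

  // Load the plugin at runtime; a cuDNN build and a raw-CUDA build of the
  // same operators could live in separate .so files.
  void* handle = dlopen(argv[1], RTLD_NOW | RTLD_LOCAL);
  if (!handle) {
    std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
    return 1;
  }

  auto register_ops =
      reinterpret_cast<RegisterOperatorsFn>(dlsym(handle, "RegisterOperators"));
  if (!register_ops) {
    std::fprintf(stderr, "plugin does not export RegisterOperators\n");
    dlclose(handle);
    return 1;
  }

  const OpEntry* entries = nullptr;
  const int count = register_ops(&entries);
  for (int i = 0; i < count; ++i) {
    std::printf("loaded operator: %s\n", entries[i].name);
  }

  dlclose(handle);
  return 0;
}
```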

On Thu, Jul 11, 2019 at 10:20 AM kellen sunderland <
kellen.sunderl...@gmail.com> wrote:

> Once it's compiled, the forward / backward, etc. kernel implementations
> are hard-coded to use cuDNN.  In theory we could support raw CUDA in
> addition to cuDNN, but the additional CUDA kernel code would bloat the
> binary (it targets several GPU types).
>
> On Thu, Jul 11, 2019 at 9:36 AM Chris Olivier <cjolivie...@gmail.com>
> wrote:
>
>> Is there an environment variable or some other way to avoid using cuDNN
>> in the Anaconda distribution of MXNet?
>>
>
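To make the quoted explanation concrete: the choice of cuDNN is baked in when the library is built, typically via a preprocessor guard along the lines of MXNET_USE_CUDNN. The sketch below only illustrates that pattern; the function name and signature are invented for the example rather than taken from the MXNet source.

```cpp
// Illustration of a compile-time cuDNN switch; names are made up for the example.
#include <cstdio>

#if MXNET_USE_CUDNN
// Compiled in only when the library was built against cuDNN.
void ConvolutionForward(const float* in, float* out, int n) {
  for (int i = 0; i < n; ++i) out[i] = in[i];  // placeholder for the cuDNN path
  std::printf("cuDNN convolution path\n");
}
#else
// Fallback path; shipping raw CUDA kernels for every supported GPU
// architecture alongside cuDNN is what would bloat the binary.
void ConvolutionForward(const float* in, float* out, int n) {
  for (int i = 0; i < n; ++i) out[i] = in[i];  // placeholder for the fallback path
  std::printf("non-cuDNN convolution path\n");
}
#endif

int main() {
  float in[4] = {1, 2, 3, 4}, out[4] = {0};
  ConvolutionForward(in, out, 4);  // which body runs was decided at build time
  return 0;
}
```

In practice that means avoiding cuDNN requires producing a different build (for example, building from source with the USE_CUDNN build option turned off) rather than flipping an environment variable at runtime.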
