The problem is that additional training of a pretrained model (a relatively
simple vision model, not something complex like ResNet, though it does have
some convolutions) converges on CPU but not on GPU; at least, validation
accuracy does not converge.
Is there an anaconda or pip package built without cuDNN that I could try?
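As far as I know there is no prebuilt pip/conda GPU package without cuDNN, so the usual route is a source build. A minimal sketch, assuming an MXNet 1.x checkout with the classic Makefile build and a CUDA toolchain already installed (paths and flag spellings should be checked against the build docs for your version):

```shell
# Sketch: build MXNet with CUDA but without cuDNN.
# USE_CUDNN=0 compiles MXNet's own CUDA kernels instead of the
# cuDNN-backed implementations.
git clone --recursive https://github.com/apache/incubator-mxnet mxnet
cd mxnet
# USE_CUDA_PATH may need to point at your local CUDA install.
make -j"$(nproc)" USE_CUDA=1 USE_CUDA_PATH=/usr/local/cuda USE_CUDNN=0
# Install the Python bindings from the local build.
cd python && pip install -e .
```

This sidesteps the prebuilt binaries entirely, which is why it also avoids the binary-bloat concern mentioned below: you only compile for your own GPU architecture.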
Having runtime-loadable / pluggable operators might help with this.
On Thu, Jul 11, 2019 at 10:20 AM kellen sunderland <
kellen.sunderl...@gmail.com> wrote:
Once it's compiled, the forward/backward etc. kernel implementations are
hard-coded to use cuDNN. In theory we could support raw CUDA in addition
to cuDNN, but the additional CUDA kernel code would bloat the binary (it
targets several GPU types).
On Thu, Jul 11, 2019 at 9:36 AM Chris Olivier wrote:
Is there an environment variable or some other way to not use CUDNN in the
anaconda distribution of mxnet?