Re: Turning CUDNN on/off in anaconda distribution

2019-07-11 Thread Chris Olivier
The problem is that additional training of a pretrained model (a relatively
simple vision-type network, not something complex like ResNet, but it does
have some convolutions) converges on CPU but not on GPU; validation, at any
rate, does not converge.

Is there an Anaconda or pip package built without cuDNN that I could try? I
don't think a rebuild is an option.
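
For reference, one possible no-rebuild check, assuming the shipped operators
expose it: Convolution (and, if I recall, Pooling and BatchNorm) accept a
per-operator cudnn_off flag that forces the raw CUDA kernel for that one op.
A rough, untested sketch with toy tensors rather than the real model:

import mxnet as mx
import numpy as np

# toy tensors only -- a real test would load the pretrained weights
x = mx.nd.random.uniform(shape=(1, 3, 32, 32))
w = mx.nd.random.uniform(shape=(16, 3, 3, 3))

out_cpu = mx.nd.Convolution(data=x, weight=w, no_bias=True,
                            num_filter=16, kernel=(3, 3))
out_gpu = mx.nd.Convolution(data=x.as_in_context(mx.gpu(0)),
                            weight=w.as_in_context(mx.gpu(0)),
                            no_bias=True, num_filter=16, kernel=(3, 3),
                            cudnn_off=True)  # raw CUDA kernel, no cuDNN
np.testing.assert_allclose(out_cpu.asnumpy(), out_gpu.asnumpy(), rtol=1e-4)

If the per-op outputs match and GPU training still diverges, another knob
worth trying is MXNET_CUDNN_AUTOTUNE_DEFAULT=0, which (if memory serves)
only disables cuDNN algorithm autotuning, not cuDNN itself.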

On Thu, Jul 11, 2019 at 10:22 AM kellen sunderland <
kellen.sunderl...@gmail.com> wrote:

> Having runtime-loadable / pluggable operators might help with this.
>
> On Thu, Jul 11, 2019 at 10:20 AM kellen sunderland <
> kellen.sunderl...@gmail.com> wrote:
>
> > Once it's compiled, the forward/backward/etc. kernel implementations are
> > hard-coded to use cuDNN.  In theory we could support raw CUDA in addition
> > to cuDNN, but the additional CUDA kernel code would bloat the binary (it
> > targets several GPU architectures).
> >
> > On Thu, Jul 11, 2019 at 9:36 AM Chris Olivier 
> > wrote:
> >
> >> Is there an environment variable or some other way to not use cuDNN in the
> >> Anaconda distribution of mxnet?
> >>
> >
>


Re: Turning CUDNN on/off in anaconda distribution

2019-07-11 Thread kellen sunderland
Having runtime-loadable / pluggable operators might help with this.
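
The Python-level CustomOp hook already gives a flavor of that kind of runtime
registration; a minimal, purely illustrative sketch with an identity "kernel"
(a real pluggable backend would need a native equivalent of this):

import mxnet as mx

class IdentityOp(mx.operator.CustomOp):
    # placeholder kernel: forward and backward just copy their inputs
    def forward(self, is_train, req, in_data, out_data, aux):
        self.assign(out_data[0], req[0], in_data[0])

    def backward(self, req, out_grad, in_data, out_data, in_grad, aux):
        self.assign(in_grad[0], req[0], out_grad[0])

@mx.operator.register('identity_op')
class IdentityOpProp(mx.operator.CustomOpProp):
    def list_arguments(self):
        return ['data']

    def infer_shape(self, in_shape):
        # output shape matches the input shape; no auxiliary states
        return in_shape, [in_shape[0]], []

    def create_operator(self, ctx, in_shapes, in_dtypes):
        return IdentityOp()

# usage: out = mx.sym.Custom(data=some_symbol, op_type='identity_op')

Shipping the cuDNN and raw-CUDA kernels as separately loadable libraries
behind an interface like that would presumably keep the default binary small.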

On Thu, Jul 11, 2019 at 10:20 AM kellen sunderland <
kellen.sunderl...@gmail.com> wrote:

> Once it's compiled, the forward/backward/etc. kernel implementations are
> hard-coded to use cuDNN.  In theory we could support raw CUDA in addition
> to cuDNN, but the additional CUDA kernel code would bloat the binary (it
> targets several GPU architectures).
>
> On Thu, Jul 11, 2019 at 9:36 AM Chris Olivier 
> wrote:
>
>> Is there an environment variable or some other way to not use cuDNN in the
>> Anaconda distribution of mxnet?
>>
>


Re: Turning CUDNN on/off in anaconda distribution

2019-07-11 Thread kellen sunderland
Once it's compiled, the forward/backward/etc. kernel implementations are
hard-coded to use cuDNN.  In theory we could support raw CUDA in addition
to cuDNN, but the additional CUDA kernel code would bloat the binary (it
targets several GPU architectures).
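
One way to confirm what a given wheel was actually compiled with, assuming a
recent enough mxnet (the runtime-feature API landed around 1.5, if I
remember right):

from mxnet.runtime import Features

features = Features()
print(features.is_enabled('CUDNN'))  # True for the cu* pip/conda builds
print(features.is_enabled('CUDA'))

And if a source build ever becomes an option, the switch is USE_CUDNN=0
(with USE_CUDA=1 kept on) in the Makefile/CMake flags, as far as I recall.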

On Thu, Jul 11, 2019 at 9:36 AM Chris Olivier  wrote:

> Is there an environment variable or some other way to not use cuDNN in the
> Anaconda distribution of mxnet?
>


Turning CUDNN on/off in anaconda distribution

2019-07-11 Thread Chris Olivier
Is there an environment variable or some other way to not use cuDNN in the
Anaconda distribution of mxnet?