Re: [theano-users] Re: CNMeM is disabled, CuDNN not available

2017-03-05 Thread anishi gupta
Sir, I think GpuArray will itself control GPU memory allocation; there is no need for cnmem now, as it is deprecated, and memory is controlled via the CUDA back-end. On Mon, Mar 6, 2017 at 12:20 PM, anishi gupta wrote: > Hello Sir, > > When I used command "device=gpuN python file.py"
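
For reference, a minimal sketch of the corresponding flags, assuming Theano >= 0.9 with the libgpuarray back-end (the 0.8 fraction is an illustrative value, not from the thread): gpuarray.preallocate takes over the role the deprecated lib.cnmem played on the old back-end.

    # Old CUDA back-end: CNMeM memory pool (deprecated)
    THEANO_FLAGS='device=gpu0,floatX=float32,lib.cnmem=0.8' python file.py

    # New gpuarray back-end: preallocation replaces CNMeM
    THEANO_FLAGS='device=cuda0,floatX=float32,gpuarray.preallocate=0.8' python file.py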

Re: [theano-users] Re: CNMeM is disabled, CuDNN not available

2017-03-05 Thread anishi gupta
Hello Sir, When I used the command "device=gpuN python file.py", the code ran successfully on the GPU. As you said, THEANO_FLAGS will override the .theanorc settings, so I think there is no need to make a .theanorc file. Kindly tell me how to make CUDA memory allocation faster if I don't use cnmem... I want to gain
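
For completeness, the same settings can live in a .theanorc instead of being passed on every command line; a minimal sketch assuming the gpuarray back-end (dotted flag names map to INI sections, and THEANO_FLAGS on the command line overrides these values):

    [global]
    device = cuda0
    floatX = float32

    [gpuarray]
    preallocate = 0.8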

[theano-users] Re: Gradients are always 0 for custom loss function

2017-03-05 Thread taromakino
Thanks Jesse, so are there operations that are "safe" to use and others that aren't? Where can I find this information? Also, I've used T.eq before in another custom loss function which works correctly and doesn't return 0 gradients, but my use case there is in computing array indices, such as
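
As an illustration of a "safe" use (a sketch, not from the thread; the variable names are hypothetical): T.eq can build an index mask through which no gradient needs to flow, while the gradient still reaches the loss through the other operand of the multiplication.

    import numpy as np
    import theano
    import theano.tensor as T

    probs = T.matrix('probs')     # e.g. softmax output, one row per sample
    labels = T.ivector('labels')  # integer class labels

    # T.eq builds a one-hot mask selecting the true-class probability.
    # Its zero gradient is harmless here: gradient flows through `probs`.
    classes = T.arange(probs.shape[1])
    onehot = T.eq(classes[None, :], labels[:, None]).astype(theano.config.floatX)
    loss = -T.log((probs * onehot).sum(axis=1)).mean()

    g = theano.grad(loss, probs)  # non-zero
    f = theano.function([probs, labels], [loss, g])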

[theano-users] Re: Gradients are always 0 for custom loss function

2017-03-05 Thread Jesse Livezey
The gradient of T.eq will be zero (almost) everywhere and you're using it to compute num_win and num_lose. On Sunday, March 5, 2017 at 2:42:14 PM UTC-8, tarom...@alum.northwestern.edu wrote: > > Also, the return values of this loss function are small compared to > cross-entropy, some sample
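
A minimal sketch demonstrating the point (the sigmoid surrogate is a generic substitution for illustration, not something proposed in the thread; k is an arbitrary sharpness value):

    import numpy as np
    import theano
    import theano.tensor as T

    pred = T.vector('pred')
    target = T.vector('target')

    # T.eq is a step function: its gradient is zero almost everywhere.
    hard_loss = T.eq(pred, target).astype(theano.config.floatX).mean()
    g_hard = theano.grad(hard_loss, pred)

    # A smooth surrogate is differentiable, so gradient can flow.
    k = 10.0
    soft_loss = -T.nnet.sigmoid(-k * T.abs_(pred - target)).mean()
    g_soft = theano.grad(soft_loss, pred)

    f = theano.function([pred, target], [g_hard, g_soft])
    gh, gs = f(np.array([0.2, 0.9], dtype=theano.config.floatX),
               np.array([0.0, 1.0], dtype=theano.config.floatX))
    print(gh)  # all zeros
    print(gs)  # non-zero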

[theano-users] Re: how do I force a elemwise op included within tensor elemwise loop

2017-03-05 Thread Adam Becker
Never mind, I figured it out. I had to disable constant folding and wrap with tensor.elemwise. On Sunday, March 5, 2017 at 10:28:39 AM UTC+8, Adam Becker wrote: > > Hi, > > I'm writing an elemwise Op for a special purpose; its c_code should be > different when it's working with tensor objects.
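
For anyone landing here later, a minimal sketch of the two steps described, assuming Theano's old-style ScalarOp C API (the op body is a placeholder square; MySquare, NoFoldElemwise, and my_square are hypothetical names):

    import theano
    import theano.tensor as T
    from theano.scalar import UnaryScalarOp, upcast_out
    from theano.tensor.elemwise import Elemwise

    class MySquare(UnaryScalarOp):
        # Placeholder body; a real op would emit context-dependent c_code.
        def impl(self, x):
            return x * x

        def c_code(self, node, name, inputs, outputs, sub):
            (x,), (z,) = inputs, outputs
            return "%(z)s = %(x)s * %(x)s;" % locals()

    class NoFoldElemwise(Elemwise):
        # The constant_folding optimization asks node.op.do_constant_folding(node);
        # returning False keeps the op in the graph, so its c_code runs
        # inside the tensor elemwise loop even for constant inputs.
        def do_constant_folding(self, node):
            return False

    my_square = NoFoldElemwise(MySquare(upcast_out, name='my_square'))

    x = T.matrix('x')
    f = theano.function([x], my_square(x))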