With the device=gpu flag, also add the flag lib.cnmem=1.

This will speed up that code. The new backend does something similar by
default.
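For anyone who wants to set this permanently rather than on the command line, the same flags can go in ~/.theanorc. A minimal sketch for the old backend (values illustrative; see the Theano config docs for the exact semantics of cnmem):

```ini
# ~/.theanorc -- enable the CNMeM memory pool on the old (device=gpu) backend
[global]
device = gpu
floatX = float32

[lib]
# Preallocate GPU memory through CNMeM. A value between 0 and 1 is taken as
# a fraction of total GPU memory; 1 enables it with (close to) all of it.
cnmem = 1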

Fred

On 14 Nov 2016 at 12:15, "Michael Klachko" <[email protected]>
wrote:

> I will try testing it on a Pascal Titan X card when I have time tomorrow,
> and will report back.
>
> On Sat, Nov 12, 2016 at 9:36 PM, Ragav Venkatesan <
> [email protected]> wrote:
>
>> When I was debugging, I also discovered that if I use ignore_border =
>> False for pooling, it doesn't run on the GPU in the libgpuarray backend;
>> ignore_border = True does. Is there anything to this?
>>
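For what it's worth, here is a rough sketch (plain Python, not Theano's actual implementation) of why the two settings differ: ignore_border=False keeps a partial pooling window at the edge, and that extra partial window is what seems to prevent the GPU path from being used:

```python
import math

def pooled_len(size, ds, ignore_border):
    """Output length of 1-D pooling with window size and stride both ds.

    ignore_border=True drops the partial window at the edge (floor division),
    ignore_border=False keeps it (ceiling division).
    """
    if ignore_border:
        # Only full windows: floor((size - ds) / ds) + 1 == floor(size / ds)
        return (size - ds) // ds + 1
    # The partial window at the border counts too: ceil(size / ds)
    return int(math.ceil(size / float(ds)))

# A 5-wide input pooled with a 2-wide window:
print(pooled_len(5, 2, ignore_border=True))   # 2 -- border column dropped
print(pooled_len(5, 2, ignore_border=False))  # 3 -- partial window kept
```

When the input size is an exact multiple of the pool size the two settings agree, which is why the difference is easy to miss.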
>>
>> On Saturday, November 12, 2016 at 7:45:22 PM UTC-7, Ragav Venkatesan
>> wrote:
>>>
>>> In htop I usually see one CPU core running at 100% in both cases.
>>>
>>>
>>> On Saturday, November 12, 2016 at 7:43:16 PM UTC-7, Michael Klachko
>>> wrote:
>>>>
>>>> I'm not sure, but just by looking at CPU usage (top command on Linux)
>>>> you should be able to see the difference.
>>>>
>>>> On Sat, Nov 12, 2016 at 6:19 PM, Ragav Venkatesan <
>>>> [email protected]> wrote:
>>>>
>>>>> Both are using cuDNN. I am wondering if some ops are running on the
>>>>> CPU; how do I find that out?
>>>>>
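One way to find out is Theano's built-in profiler: with profiling enabled, Theano prints per-op timings at exit, and ops that executed on the GPU have class names starting with Gpu (e.g. GpuElemwise); anything else ran on the host. A sketch (the script name is illustrative):

```
THEANO_FLAGS=profile=True python cnn_tutorial.py
```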
>>>>> On Friday, November 11, 2016 at 10:00:39 PM UTC-7, Michael Klachko
>>>>> wrote:
>>>>>>
>>>>>> Do both versions use cuDNN? If the gpu0 version didn't use it, that
>>>>>> would explain the difference. Also, look at CPU usage for the gpu0
>>>>>> version - it could be that some ops are running on the CPU instead of
>>>>>> the GPU.
>>>>>>
>>>>>> On Fri, Nov 11, 2016 at 2:20 PM, Ragav Venkatesan <
>>>>>> [email protected]> wrote:
>>>>>>
>>>>>>> Running on a GTX 1080, device=cuda0 runs for 1.69 minutes at 98%
>>>>>>> utilization, while device=gpu0 runs for 5.12 minutes at 34%. Both run
>>>>>>> the same cnn_tutorial code from the Theano tutorials, completely
>>>>>>> unmodified, with floatX=float32, mode=FAST_RUN, nvcc.fastmath=True
>>>>>>> and allow_gc=True.
>>>>>>>
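As an environment-variable one-liner, those settings would look something like this (script name illustrative; note the garbage-collection flag is spelled allow_gc in the Theano docs):

```
THEANO_FLAGS=device=cuda0,floatX=float32,mode=FAST_RUN,nvcc.fastmath=True,allow_gc=True python cnn_tutorial.py
```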
>>>>>>> On Thursday, November 10, 2016 at 4:47:38 PM UTC-7, Michael Klachko
>>>>>>> wrote:
>>>>>>>>
>>>>>>>> Yes. It depends on the size of your network/input - the smaller it
>>>>>>>> is, the harder it is to keep 3k cores busy all the time.
>>>>>>>> Regarding timing, you don't need to write much code:
>>>>>>>>
>>>>>>>> import time
>>>>>>>> start_time = time.time()
>>>>>>>> # ... your code here ...
>>>>>>>> print("Code ran for {:.1f} minutes".format((time.time() - start_time) / 60))
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Thu, Nov 10, 2016 at 3:26 PM, Ragav Venkatesan <
>>>>>>>> [email protected]> wrote:
>>>>>>>>
>>>>>>>>> I'm writing code to test this, but why do you ask? Is there a case
>>>>>>>>> where nvidia-smi might give me 35% utilization when the GPU is
>>>>>>>>> actually running the code as fast as it can?
>>>>>>>>>
>>>>>>>>> On Wednesday, November 9, 2016 at 5:36:14 PM UTC-7, Michael
>>>>>>>>> Klachko wrote:
>>>>>>>>>>
>>>>>>>>>> Ragav, so when the GPU is 98% utilized, is the training faster
>>>>>>>>>> than when it's 35% utilized? Have you timed it?
>>>>>>>>>>
>>>>>>>>>> On Wed, Nov 9, 2016 at 4:09 PM, Ragav Venkatesan <
>>>>>>>>>> [email protected]> wrote:
>>>>>>>>>>
>>>>>>>>>>> After investigating further, I don't think this is a speed
>>>>>>>>>>> issue. I think the newer versions of CUDA/cuDNN with the cuda
>>>>>>>>>>> backend are not using the GPU fully. The older versions
>>>>>>>>>>> (7.5/5103) of CUDA/cuDNN produce 98% GPU utilization, but the
>>>>>>>>>>> same code on the latest versions (8.0/5105) doesn't. The code,
>>>>>>>>>>> by the way, is the LeNet tutorial from Theano, so it's not some
>>>>>>>>>>> weird coding error either. Using the libgpuarray backend, I am
>>>>>>>>>>> able to get 98% utilization even with CUDA/cuDNN (8.0/5105).
>>>>>>>>>>>
>>>>>>>>>>> On Wednesday, November 9, 2016 at 9:48:40 AM UTC-7, nouiz wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> It could be that the new back-end (libgpuarray) is faster and
>>>>>>>>>>>> more efficient in those cases. So just use that back-end :)
>>>>>>>>>>>>
>>>>>>>>>>>> The speed difference between the two back-ends isn't constant,
>>>>>>>>>>>> but the new back-end should be a little faster on average.
>>>>>>>>>>>>
>>>>>>>>>>>> We have found a few speed regressions in the new back-end, but
>>>>>>>>>>>> they were fixed. If you find one, just tell us and we'll fix
>>>>>>>>>>>> it, but the probability of a slowdown in the new back-end is
>>>>>>>>>>>> still low.
>>>>>>>>>>>>
>>>>>>>>>>>> We just merged one such fix for indexing. Make sure to update
>>>>>>>>>>>> libgpuarray and recompile it if you want to be sure you have
>>>>>>>>>>>> the fastest version.
>>>>>>>>>>>>
>>>>>>>>>>>> Fred
>>>>>>>>>>>>
>>>>>>>>>>>> On Tue, Nov 8, 2016 at 1:56 PM, Ragav Venkatesan <
>>>>>>>>>>>> [email protected]> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> OK, here is a problem I'm hitting and I am not sure how to
>>>>>>>>>>>>> solve it. If I use the libgpuarray backend on the cnn_tutorial,
>>>>>>>>>>>>> I get 98% GPU utilization with cuDNN 5105. If I use the cuda
>>>>>>>>>>>>> backend, I only get about 35% utilization.
>>>>>>>>>>>>> Any idea why this might be?
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Monday, October 24, 2016 at 9:38:17 AM UTC-7, nouiz wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> What errors do you have? Delete your Theano cache, just in
>>>>>>>>>>>>>> case, and be sure to use the Theano dev version. The last
>>>>>>>>>>>>>> release doesn't support it, I think.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Fred
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Mon, Oct 24, 2016 at 12:33 PM, Michael Klachko <
>>>>>>>>>>>>>> [email protected]> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Yes, it's supported, I'm using it right now (CUDA 8.0 on
>>>>>>>>>>>>>>> Ubuntu 14.04):
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> >>> import theano
>>>>>>>>>>>>>>> Using gpu device 0: TITAN X (Pascal) (CNMeM is enabled with
>>>>>>>>>>>>>>> initial size: 30.0% of memory, cuDNN 5105)
>>>>>>>>>>>>>>> >>> print theano.__version__
>>>>>>>>>>>>>>> 0.9.0dev3.dev-20fd30a38d34687e9d944140042762ca9fca6276
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Saturday, October 22, 2016 at 2:54:00 PM UTC-7, Ragav
>>>>>>>>>>>>>>> Venkatesan wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I updated and I'm getting some weird errors. With the cuda
>>>>>>>>>>>>>>>> backend, convolutions only run on the CPU, and with the
>>>>>>>>>>>>>>>> libgpuarray backend the GPU only runs at about 35%
>>>>>>>>>>>>>>>> utilization.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>
>>>>>>
>>>>
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/d/optout.
