Re: [theano-users] Re: Is CuDnn 5105 supported ?

2016-11-09 Thread Ragav Venkatesan
Good question. No, I haven't. I will do it by tonight if I can.

On Wednesday, November 9, 2016, Michael Klachko 
wrote:

> Ragav, so when the GPU is 98% utilized, is the training faster than when it's
> 35% utilized? Have you timed it?
>
> On Wed, Nov 9, 2016 at 4:09 PM, Ragav Venkatesan <ragav.venkate...@gmail.com> wrote:
>
>> After investigating further, I don't think this is a speed issue.
>> I think the newer version of CUDA/cuDNN is not using the GPU fully with the
>> cuda backend. The older versions (7.5/5103) of CUDA/cuDNN produce 98% GPU
>> utilization, but the same code on the latest versions (8.0/5105) doesn't. The
>> code, by the way, is the LeNet tutorial from Theano, so it's not some odd
>> coding error either. Using the libgpuarray backend, I am able to get 98%
>> utilization even with CUDA/cuDNN (8.0/5105).
>>
>> On Wednesday, November 9, 2016 at 9:48:40 AM UTC-7, nouiz wrote:
>>>
>>> It could be that the new back-end (libgpuarray) is faster and more
>>> efficient in those cases. So just use that back-end :)
>>>
>>> The speed difference between the two back-ends isn't constant, but the new
>>> back-end should be a little faster on average.
>>>
>>> We have found a few speed regressions in the new back-end, but they were
>>> fixed. If you find one, just tell us and we'll fix it. The probability of
>>> hitting a slowdown in the new back-end is still low.
>>>
>>> We just merged one such fix for indexing. Make sure to update
>>> libgpuarray and recompile it if you want to be sure to have the fastest
>>> version.
>>>
>>> Fred
>>>
>>> On Tue, Nov 8, 2016 at 1:56 PM, Ragav Venkatesan 
>>> wrote:
>>>
 Ok, here is a problem I'm getting, and I am not sure how to solve it.
 If I use the libgpuarray backend on the cnn_tutorial, I am getting 98% GPU
 utilization with cuDNN 5105. If I use the cuda backend, I am only getting
 about 35% utilization.
 Any idea why this might be so?

 On Monday, October 24, 2016 at 9:38:17 AM UTC-7, nouiz wrote:
>
> What errors do you have? Delete your Theano cache, just in case, and be
> sure to use the Theano dev version. The last release doesn't support it, I think.
>
> Fred
>
> On Mon, Oct 24, 2016 at 12:33 PM, Michael Klachko <
> michael...@gmail.com> wrote:
>
>> Yes, it's supported, I'm using it right now (CUDA 8.0 on Ubuntu
>> 14.04):
>>
>> >>> import theano
>> Using gpu device 0: TITAN X (Pascal) (CNMeM is enabled with initial
>> size: 30.0% of memory, cuDNN 5105)
>> >>> print theano.__version__
>> 0.9.0dev3.dev-20fd30a38d34687e9d944140042762ca9fca6276
>>
>>
>>
>>
>>
>> On Saturday, October 22, 2016 at 2:54:00 PM UTC-7, Ragav Venkatesan
>> wrote:
>>>
>>> I updated and I'm getting some weird errors. With the cuda backend,
>>> convolutions only run on the CPU, and with the libgpuarray backend the GPUs
>>> only run at about 35% utilization.
>>>
>>>


-- 
Ragav



Re: [theano-users] Re: Is CuDnn 5105 supported ?

2016-11-09 Thread Michael Klachko
Ragav, so when the GPU is 98% utilized, is the training faster than when it's
35% utilized? Have you timed it?
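(If you want to time it, a minimal sketch, assuming the train_model function
and n_train_batches from the deeplearning.net LeNet tutorial:)

import time

start = time.time()
for minibatch_index in range(n_train_batches):
    train_model(minibatch_index)  # one compiled Theano training step
print('one epoch took %.2f s' % (time.time() - start))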

On Wed, Nov 9, 2016 at 4:09 PM, Ragav Venkatesan  wrote:

> After investigating further, I don't think this is a speed issue. I think
> the newer version of CUDA/cuDNN is not using the GPU fully with the cuda
> backend. The older versions (7.5/5103) of CUDA/cuDNN produce 98% GPU
> utilization, but the same code on the latest versions (8.0/5105) doesn't. The
> code, by the way, is the LeNet tutorial from Theano, so it's not some odd
> coding error either. Using the libgpuarray backend, I am able to get 98%
> utilization even with CUDA/cuDNN (8.0/5105).
>
> On Wednesday, November 9, 2016 at 9:48:40 AM UTC-7, nouiz wrote:
>>
>> It could be that the new back-end (libgpuarray) is faster and more
>> efficient in those cases. So just use that back-end :)
>>
>> The speed difference between the two back-ends isn't constant, but the new
>> back-end should be a little faster on average.
>>
>> We have found a few speed regressions in the new back-end, but they were
>> fixed. If you find one, just tell us and we'll fix it. The probability of
>> hitting a slowdown in the new back-end is still low.
>>
>> We just merged one such fix for indexing. Make sure to update
>> libgpuarray and recompile it if you want to be sure to have the fastest
>> version.
>>
>> Fred
>>
>> On Tue, Nov 8, 2016 at 1:56 PM, Ragav Venkatesan 
>> wrote:
>>
>>> Ok, here is a problem I'm getting, and I am not sure how to solve it.
>>> If I use the libgpuarray backend on the cnn_tutorial, I am getting 98% GPU
>>> utilization with cuDNN 5105. If I use the cuda backend, I am only getting
>>> about 35% utilization.
>>> Any idea why this might be so?
>>>
>>> On Monday, October 24, 2016 at 9:38:17 AM UTC-7, nouiz wrote:

 What errors do you have? Delete your Theano cache, just in case, and be
 sure to use the Theano dev version. The last release doesn't support it, I think.

 Fred

 On Mon, Oct 24, 2016 at 12:33 PM, Michael Klachko wrote:
> Yes, it's supported, I'm using it right now (CUDA 8.0 on Ubuntu 14.04):
>
> >>> import theano
> Using gpu device 0: TITAN X (Pascal) (CNMeM is enabled with initial
> size: 30.0% of memory, cuDNN 5105)
> >>> print theano.__version__
> 0.9.0dev3.dev-20fd30a38d34687e9d944140042762ca9fca6276
>
>
>
>
>
> On Saturday, October 22, 2016 at 2:54:00 PM UTC-7, Ragav Venkatesan
> wrote:
>>
>> I updated and I'm getting some weird errors. With the cuda backend,
>> convolutions only run on the CPU, and with the libgpuarray backend the GPUs
>> only run at about 35% utilization.
>>
>>



Re: [theano-users] Re: Is CuDnn 5105 supported ?

2016-11-09 Thread Ragav Venkatesan
After investigating further, I don't think this is a speed issue. I
think the newer version of CUDA/cuDNN is not using the GPU fully with the
cuda backend. The older versions (7.5/5103) of CUDA/cuDNN produce 98% GPU
utilization, but the same code on the latest versions (8.0/5105) doesn't. The
code, by the way, is the LeNet tutorial from Theano, so it's not some odd
coding error either. Using the libgpuarray backend, I am able to get 98%
utilization even with CUDA/cuDNN (8.0/5105).

On Wednesday, November 9, 2016 at 9:48:40 AM UTC-7, nouiz wrote:
>
> It could be that the new back-end (libgpuarray) is faster and more
> efficient in those cases. So just use that back-end :)
>
> The speed difference between the two back-ends isn't constant, but the new
> back-end should be a little faster on average.
>
> We have found a few speed regressions in the new back-end, but they were
> fixed. If you find one, just tell us and we'll fix it. The probability of
> hitting a slowdown in the new back-end is still low.
>
> We just merged one such fix for indexing. Make sure to update libgpuarray
> and recompile it if you want to be sure to have the fastest version.
>
> Fred
>
> On Tue, Nov 8, 2016 at 1:56 PM, Ragav Venkatesan wrote:
>
>> Ok, here is a problem I'm getting, and I am not sure how to solve it. If
>> I use the libgpuarray backend on the cnn_tutorial, I am getting 98% GPU
>> utilization with cuDNN 5105. If I use the cuda backend, I am only getting
>> about 35% utilization.
>> Any idea why this might be so?
>>
>> On Monday, October 24, 2016 at 9:38:17 AM UTC-7, nouiz wrote:
>>>
>>> What errors do you have? Delete your Theano cache, just in case, and be
>>> sure to use the Theano dev version. The last release doesn't support it, I think.
>>>
>>> Fred
>>>
>>> On Mon, Oct 24, 2016 at 12:33 PM, Michael Klachko  
>>> wrote:
>>>
 Yes, it's supported, I'm using it right now (CUDA 8.0 on Ubuntu 14.04):

 >>> import theano
 Using gpu device 0: TITAN X (Pascal) (CNMeM is enabled with initial 
 size: 30.0% of memory, cuDNN 5105)
 >>> print theano.__version__
 0.9.0dev3.dev-20fd30a38d34687e9d944140042762ca9fca6276





 On Saturday, October 22, 2016 at 2:54:00 PM UTC-7, Ragav Venkatesan 
 wrote:
>
> I updated and I'm getting some weird errors. With the cuda backend,
> convolutions only run on the CPU, and with the libgpuarray backend the GPUs
> only run at about 35% utilization.
>
>
>
>



Re: [theano-users] Re: Is CuDnn 5105 supported ?

2016-11-09 Thread Frédéric Bastien
If you use the flag

device=gpu*

you use the old back-end.

If you use the flag:

device=cuda*

you use the new back-end (libgpuarray).

If the new back-end works for you, use it. If not, tell us! But we are
pretty confident that it should work. We have ported practically all operations
to it now. The ones missing are used very infrequently, so you probably
won't miss them.

Here is a link to help you convert to the new back-end. It has some
information that should help you:

https://github.com/Theano/Theano/wiki/Converting-to-the-new-gpu-back-end%28gpuarray%29
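For example (a sketch; the device index, floatX setting, and script name are
placeholders to adapt to your setup):

# old back-end (CUDA):
THEANO_FLAGS='device=gpu0,floatX=float32' python your_script.py
# new back-end (libgpuarray):
THEANO_FLAGS='device=cuda0,floatX=float32' python your_script.py

or, equivalently, in ~/.theanorc:

[global]
device = cuda0
floatX = float32

The banner printed when Theano is imported also tells you which back-end is
active: the old back-end prints a line like "Using gpu device 0: ...", as
quoted earlier in this thread.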

On Wed, Nov 9, 2016 at 2:51 PM, Michael Klachko 
wrote:

> I'm a little confused regarding the two backends. Is there any
> recommendation as to when we should be using one or the other? I installed
> both on my Ubuntu machine, and I don't even know which one I'm using for my
> convnets. How can I tell? Or how can I configure it to use one or another?
>
> On Wed, Nov 9, 2016 at 8:48 AM, Frédéric Bastien <
> frederic.bast...@gmail.com> wrote:
>
>> It could be that the new back-end (libgpuarray) is faster and more
>> efficient in those cases. So just use that back-end :)
>>
>> The speed difference between the two back-ends isn't constant, but the new
>> back-end should be a little faster on average.
>>
>> We have found a few speed regressions in the new back-end, but they were
>> fixed. If you find one, just tell us and we'll fix it. The probability of
>> hitting a slowdown in the new back-end is still low.
>>
>> We just merged one such fix for indexing. Make sure to update
>> libgpuarray and recompile it if you want to be sure to have the fastest
>> version.
>>
>> Fred
>>
>> On Tue, Nov 8, 2016 at 1:56 PM, Ragav Venkatesan <
>> ragav.venkate...@gmail.com> wrote:
>>
>>> Ok, here is a problem I'm getting, and I am not sure how to solve it.
>>> If I use the libgpuarray backend on the cnn_tutorial, I am getting 98% GPU
>>> utilization with cuDNN 5105. If I use the cuda backend, I am only getting
>>> about 35% utilization.
>>> Any idea why this might be so?
>>>
>>> On Monday, October 24, 2016 at 9:38:17 AM UTC-7, nouiz wrote:

 What errors do you have? Delete your Theano cache, just in case, and be
 sure to use the Theano dev version. The last release doesn't support it, I think.

 Fred

 On Mon, Oct 24, 2016 at 12:33 PM, Michael Klachko wrote:
> Yes, it's supported, I'm using it right now (CUDA 8.0 on Ubuntu 14.04):
>
> >>> import theano
> Using gpu device 0: TITAN X (Pascal) (CNMeM is enabled with initial
> size: 30.0% of memory, cuDNN 5105)
> >>> print theano.__version__
> 0.9.0dev3.dev-20fd30a38d34687e9d944140042762ca9fca6276
>
>
>
>
>
> On Saturday, October 22, 2016 at 2:54:00 PM UTC-7, Ragav Venkatesan
> wrote:
>>
>> I updated and I'm getting some weird errors. With the cuda backend,
>> convolutions only run on the CPU, and with the libgpuarray backend the GPUs
>> only run at about 35% utilization.
>>
>>



Re: [theano-users] How to reproduce training results?

2016-11-09 Thread Michael Klachko
Andre, can you please post your Theano config and your convnet parameters?
To debug this, I would first try running on CPU only, then an MLP, then a convnet
with no pooling and tanh, with cuDNN disabled, then partially enable one
thing at a time.
Also, check the obvious things like using the same initial weights, and
disabling dropout.
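(If cuDNN turns out to be the source of the non-determinism, recent Theano
versions expose flags to force deterministic convolution gradients, at some
speed cost — a sketch, with train.py as a placeholder for your script:)

THEANO_FLAGS='dnn.conv.algo_bwd_filter=deterministic,dnn.conv.algo_bwd_data=deterministic' python train.py

And seed a single RNG up front for the weights:

import numpy as np

rng = np.random.RandomState(1234)  # pass this rng to every layer's weight init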

On Tue, Nov 8, 2016 at 1:57 PM, André Lopes 
wrote:

> Even with the settings Michael posted, I still couldn't get deterministic
> results...
> Have there been any updates on this issue since March?
>



Re: [theano-users] Re: Is CuDnn 5105 supported ?

2016-11-09 Thread Michael Klachko
I'm a little confused regarding the two backends. Is there any
recommendation as to when we should be using one or the other? I installed
both on my Ubuntu machine, and I don't even know which one I'm using for my
convnets. How can I tell? Or how can I configure it to use one or another?

On Wed, Nov 9, 2016 at 8:48 AM, Frédéric Bastien  wrote:

> It could be that the new back-end (libgpuarray) is faster and more
> efficient in those cases. So just use that back-end :)
>
> The speed difference between the two back-ends isn't constant, but the new
> back-end should be a little faster on average.
>
> We have found a few speed regressions in the new back-end, but they were
> fixed. If you find one, just tell us and we'll fix it. The probability of
> hitting a slowdown in the new back-end is still low.
>
> We just merged one such fix for indexing. Make sure to update libgpuarray
> and recompile it if you want to be sure to have the fastest version.
>
> Fred
>
> On Tue, Nov 8, 2016 at 1:56 PM, Ragav Venkatesan <
> ragav.venkate...@gmail.com> wrote:
>
>> Ok, here is a problem I'm getting, and I am not sure how to solve it. If
>> I use the libgpuarray backend on the cnn_tutorial, I am getting 98% GPU
>> utilization with cuDNN 5105. If I use the cuda backend, I am only getting
>> about 35% utilization.
>> Any idea why this might be so?
>>
>> On Monday, October 24, 2016 at 9:38:17 AM UTC-7, nouiz wrote:
>>>
>>> What errors do you have? Delete your Theano cache, just in case, and be
>>> sure to use the Theano dev version. The last release doesn't support it, I think.
>>>
>>> Fred
>>>
>>> On Mon, Oct 24, 2016 at 12:33 PM, Michael Klachko 
>>> wrote:
>>>
 Yes, it's supported, I'm using it right now (CUDA 8.0 on Ubuntu 14.04):

 >>> import theano
 Using gpu device 0: TITAN X (Pascal) (CNMeM is enabled with initial
 size: 30.0% of memory, cuDNN 5105)
 >>> print theano.__version__
 0.9.0dev3.dev-20fd30a38d34687e9d944140042762ca9fca6276





 On Saturday, October 22, 2016 at 2:54:00 PM UTC-7, Ragav Venkatesan
 wrote:
>
> I updated and I'm getting some weird errors. With the cuda backend,
> convolutions only run on the CPU, and with the libgpuarray backend the GPUs
> only run at about 35% utilization.
>
>



Re: [theano-users] Any way to initialize shared variable with unknown shape?

2016-11-09 Thread Pascal Lamblin
Hi,

It is not possible to initialize a shared variable without a shape,
since it needs a value.

However, the shape of a shared variable is not fixed (only the number of
dimensions and dtype are).

So you could create a shared variable with shape (1, 1, ...) for
instance, and then either call set_value() or a function with
updates=... to properly initialize it when you actually know its shape.
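A minimal sketch of that pattern (the shapes here are arbitrary examples):

import numpy as np
import theano

# the number of dimensions (2) and the dtype are fixed at creation; the shape is not
v = theano.shared(np.zeros((1, 1), dtype=theano.config.floatX))

# later, once the real shape is known:
v.set_value(np.zeros((128, 784), dtype=theano.config.floatX))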

On Wed, Nov 09, 2016, Peter O'Connor wrote:
> Hi all, I'm implementing a "temporal difference", which is just this:
> 
> class TemporalDifference(object):
>
>     def __init__(self, shape):
>         self.old = theano.shared(np.zeros(shape))
>
>     def __call__(self, data):
>         diff = data - self.old
>         add_update(self.old, data)
>         return diff
> 
> 
> It would be really nice not to have to pass in the shape in advance, as it 
> can be a bit difficult to figure out sometimes.  Is there some way to do 
> this without having to know the shape in advance?
> 
> Thanks.
> 


-- 
Pascal



Re: [theano-users] Re: Is CuDnn 5105 supported ?

2016-11-09 Thread Frédéric Bastien
It could be that the new back-end (libgpuarray) is faster and more
efficient in those cases. So just use that back-end :)

The speed difference between the two back-ends isn't constant, but the new
back-end should be a little faster on average.

We have found a few speed regressions in the new back-end, but they were
fixed. If you find one, just tell us and we'll fix it. The probability of
hitting a slowdown in the new back-end is still low.

We just merged one such fix for indexing. Make sure to update libgpuarray
and recompile it if you want to be sure to have the fastest version.

Fred
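(Roughly, following the libgpuarray install instructions — a sketch; the build
directory and whether you need sudo depend on your setup:)

cd libgpuarray
git pull
cd build && cmake .. && make && sudo make install && cd ..  # C library
python setup.py build                                       # pygpu bindings
python setup.py install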

On Tue, Nov 8, 2016 at 1:56 PM, Ragav Venkatesan  wrote:

> Ok, here is a problem I'm getting, and I am not sure how to solve it. If
> I use the libgpuarray backend on the cnn_tutorial, I am getting 98% GPU
> utilization with cuDNN 5105. If I use the cuda backend, I am only getting
> about 35% utilization.
> Any idea why this might be so?
>
> On Monday, October 24, 2016 at 9:38:17 AM UTC-7, nouiz wrote:
>>
>> What errors do you have? Delete your Theano cache, just in case, and be
>> sure to use the Theano dev version. The last release doesn't support it, I think.
>>
>> Fred
>>
>> On Mon, Oct 24, 2016 at 12:33 PM, Michael Klachko 
>> wrote:
>>
>>> Yes, it's supported, I'm using it right now (CUDA 8.0 on Ubuntu 14.04):
>>>
>>> >>> import theano
>>> Using gpu device 0: TITAN X (Pascal) (CNMeM is enabled with initial
>>> size: 30.0% of memory, cuDNN 5105)
>>> >>> print theano.__version__
>>> 0.9.0dev3.dev-20fd30a38d34687e9d944140042762ca9fca6276
>>>
>>>
>>>
>>>
>>>
>>> On Saturday, October 22, 2016 at 2:54:00 PM UTC-7, Ragav Venkatesan
>>> wrote:

 I updated and I'm getting some weird errors. With the cuda backend,
 convolutions only run on the CPU, and with the libgpuarray backend the GPUs
 only run at about 35% utilization.


 --
>>>
>>> ---
>>> You received this message because you are subscribed to the Google
>>> Groups "theano-users" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to theano-users...@googlegroups.com.
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>>
>> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>



Re: [theano-users] index out of bounds problem

2016-11-09 Thread Pascal Lamblin
On Tue, Nov 08, 2016, Jiali Zhou wrote:
> Dear Pascal,
> Thanks for your reply. I've tried to look into self.inp_c, but it gives an
> error like this:
> theano.gof.fg.MissingInputError: An input of the graph, used to compute
> Subtensor{int64::}(input_var, Constant{0}), was not provided and not given
> a value. Use the Theano flag exception_verbosity='high' for more information
> on this error.

What are you doing exactly that causes that error?

> Besides, InplaceDimShuffle{1,2,0}.0 is supposed to shuffle the dimensions
> in place first, so shouldn't the first dimension actually be 40 instead of 0?
> Am I on the wrong track? Thanks!

It looks like self.inp_c may be the output of the dimshuffle.
In that case, its shape would be (0, 40, 5), but the original variable would be 
(5, 0, 40).
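(A numpy analogy of what the graph is computing, with the shapes taken from
the error message:)

import numpy as np

a = np.zeros((5, 0, 40), dtype='float32')  # original variable; the second axis is empty
b = a.transpose(1, 2, 0)                   # what InplaceDimShuffle{1,2,0} produces: shape (0, 40, 5)
b[0]                                       # IndexError: axis 0 has length 0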

> On Monday, November 7, 2016 at 10:57:22 PM UTC-5, Pascal Lamblin wrote:
> >
> > The error message is telling you that the corresponding computation in 
> > the symbolic graph is at the line: 
> >
> > > outputs_info=T.zeros_like(self.inp_c[0][0])) 
> >
> > Since the corresponding node is 
> >
> > > Subtensor{int64}(InplaceDimShuffle{1,2,0}.0, Constant{0}) 
> >
> > It is likely that the problem occurred when computing "self.inp_c[0]". 
> >
> > The error message also gives you the shape of the indexed variable 
> > (self.inp_c, probably), which is (0, 40, 5). 
> >
> > So the issue is that you are trying to take the first element of an 
> > empty array, which is out of bounds. 
> >
> > On Mon, Nov 07, 2016, Jiali Zhou wrote: 
> > > Dear all,
> > > I am new to Theano and trying to reproduce the result from YerevaNN's
> > > dmn_qa_draft.py using the Who-Did-What dataset.
> > > However, the code gives the following errors.
> > >
> > >     if mode == 'train':
> > >         gradient_value = self.get_gradient_fn(inp, q, ans, ca, cb, cc,
> > >                                               cd, ce, input_mask)
> > >     '''
> > >     if self.mode == 'train':
> > >         print "==> computing gradients (for debugging)"
> > >         gradient = T.grad(self.loss, self.params)
> > >         self.get_gradient_fn = theano.function(
> > >             inputs=[self.inp_var, self.q_var, self.ans_var, self.ca_var,
> > >                     self.cb_var, self.cc_var, self.cd_var, self.ce_var,
> > >                     self.input_mask_var],
> > >             outputs=gradient)
> > >     '''
> > >
> > > I googled online and found that the error is due to an index out of
> > > bounds in the outputs, if I am right. But here theano.function will give
> > > out a gradient. So what is the exact problem here?
> > > Any suggestions will be appreciated.
> > >
> > > File
> > > "/Users/baymax/anaconda2/lib/python2.7/site-packages/theano/compile/function_module.py",
> > > line 873, in __call__
> > >     self.fn() if output_subset is None else\
> > >
> > > IndexError: index out of bounds
> > >
> > > Apply node that caused the error:
> > > Subtensor{int64}(InplaceDimShuffle{1,2,0}.0, Constant{0})
> > > Toposort index: 1057
> > > Inputs types: [TensorType(float32, 3D), Scalar(int64)]
> > > Inputs shapes: [(0, 40, 5), ()]
> > > Inputs strides: [(160, 4, 4), ()]
> > > Inputs values: [array([], shape=(0, 40, 5), dtype=float32), 0]
> > > Inputs type_num: [11, 7]
> > > Outputs clients: [[Subtensor{int64}(Subtensor{int64}.0, Constant{0})]]
> > >
> > > Backtrace when the node is created (use Theano flag traceback.limit=N to
> > > make it longer):
> > >
> > > File "main.py", line 64, in 
> > >     dmn = dmn_qa.DMN_qa(**args_dict)
> > > File "/Users/baymax/Desktop/nlp/proj/dmn/dmn_qa.py", line 114, in __init__
> > >     current_episode = self.new_episode(memory[iter - 1])
> > > File "/Users/baymax/Desktop/nlp/proj/dmn/dmn_qa.py", line 248, in new_episode
> > >     outputs_info=T.zeros_like(self.inp_c[0][0]))
> >
> >
> >
> > -- 
> > Pascal 
> >
> 


-- 
Pascal


[theano-users] Any way to initialize shared variable with unknown shape?

2016-11-09 Thread Peter O'Connor
Hi all, I'm implementing a "temporal difference", which is just this:

import numpy as np
import theano

class TemporalDifference(object):

    def __init__(self, shape):
        # state: a copy of the previous input, initialized to zeros
        self.old = theano.shared(np.zeros(shape))

    def __call__(self, data):
        diff = data - self.old
        # add_update: a helper (not a core Theano function) that registers
        # the update self.old <- data on the compiled function
        add_update(self.old, data)
        return diff


It would be really nice not to have to pass in the shape in advance, as it 
can be a bit difficult to figure out sometimes.  Is there some way to do 
this without having to know the shape in advance?

Thanks.



Re: [theano-users] Question about LeNet-5 hidden layer as implemented on deeplearning.net

2016-11-09 Thread Frédéric Bastien
I'm not sure I understand the question correctly, but I think the answer is
yes.

Hidden layers in an MLP are fully connected layers.

Fred

On Wed, Nov 9, 2016 at 7:26 AM, Beatriz G.  wrote:

> Hi,
>
> Could an independent hidden layer of an MLP be used as a fully connected
> layer?
>
> Regards.
>
>
> On Friday, March 13, 2015 at 18:25:38 (UTC+1), Pascal Lamblin wrote:
>>
>> Hi,
>>
>> On Fri, Mar 13, 2015, Orry Messer wrote:
>> > I don't quite understand the structure of the network after the second
>> > convolution/pooling layer and just before the hidden layer.
>> > I think what is confusing me is the batch aspect of it.
>> > With a batch size of 500, the hidden layer will have 500 units. This
>> much
>> > I'm ok with.
>>
>> That's not right... by coincidence, the batch size is 500, which is
>> also the number of output units of that layer. Each of these units will
>> compute a different value for each of the examples in the batch, so the
>> output of that layer will be (batch_size, n_outputs) or (500, 500).
>>
>> > But what is the input to this hidden layer? In the comments it says
>> that
>> > the hidden layer operates on matrices of size (batch_size, pixel_size),
>> > which in the case of this code
>> > is (500, 800).
>>
>> That's correct.
>>
>> > Does this then mean that each hidden unit has 500*800 inputs?
>>
>> Not really. Each hidden unit has 800 scalar inputs. Each of these inputs
>> will take a different value for each example of the minibatch, and the
>> neuron will also output one value for each example.
>>
>> > And since there are 500 hidden units, does this then mean that there
>> > are a total of 500*(500*800) connections and as many weights to tune
>> > in this layer alone?
>>
>> No, the weights are the same for all the examples of the minibatch.
>>
>> --
>> Pascal
>>
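To make the shape arithmetic above concrete, here is a numpy sketch with the
same numbers (800 inputs, 500 hidden units, a minibatch of 500):

import numpy as np

W = np.zeros((800, 500))   # one weight matrix, shared by every example in the batch
b = np.zeros(500)
x = np.zeros((500, 800))   # (batch_size, n_in)
h = np.tanh(x.dot(W) + b)  # (batch_size, n_out) == (500, 500)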



Re: [theano-users] Question about LeNet-5 hidden layer as implemented on deeplearning.net

2016-11-09 Thread Beatriz G.
Hi,

Could an independent hidden layer of an MLP be used as a fully connected layer?

Regards.

On Friday, March 13, 2015 at 18:25:38 (UTC+1), Pascal Lamblin wrote:
>
> Hi, 
>
> On Fri, Mar 13, 2015, Orry Messer wrote: 
> > I don't quite understand the structure of the network after the second 
> > convolution/pooling layer and just before the hidden layer. 
> > I think what is confusing me is the batch aspect of it. 
> > With a batch size of 500, the hidden layer will have 500 units. This 
> much 
> > I'm ok with. 
>
> That's not right... by coincidence, the batch size is 500, which is
> also the number of output units of that layer. Each of these units will
> compute a different value for each of the examples in the batch, so the
> output of that layer will be (batch_size, n_outputs) or (500, 500).
>
> > But what is the input to this hidden layer? In the comments it says that 
> > the hidden layer operates on matrices of size (batch_size, pixel_size), 
> > which in the case of this code 
> > is (500, 800). 
>
> That's correct. 
>
> > Does this then mean that each hidden unit has 500*800 inputs? 
>
> Not really. Each hidden unit has 800 scalar inputs. Each of these inputs 
> will take a different value for each example of the minibatch, and the 
> neuron will also output one value for each example. 
>
> > And since there are 500 hidden units, does this then mean that there 
> > are a total of 500*(500*800) connections and as many weights to tune 
> > in this layer alone? 
>
> No, the weights are the same for all the examples of the minibatch. 
>
> -- 
> Pascal 
>



[theano-users] Re: response normalization layer

2016-11-09 Thread Beatriz G.
I just want to add that the response normalization I would like to apply
is the local one.

Regards
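(For reference, a minimal sketch of AlexNet-style local response normalization
across channels, written with plain Theano ops. This is an illustration of the
usual Krizhevsky et al. formula, not an existing Theano layer; k, alpha, beta
and n are the standard hyperparameters, and the input is assumed to be the 4D
bc01 output of conv2d:)

import theano.tensor as T

def local_response_normalization(x, k=2, alpha=1e-4, beta=0.75, n=5):
    # x: (batch, channels, rows, cols)
    half = n // 2
    sq = T.sqr(x)
    ch = x.shape[1]
    # zero-pad the channel axis so every channel sees a full window of n
    padded = T.alloc(0., x.shape[0], ch + 2 * half, x.shape[2], x.shape[3])
    padded = T.set_subtensor(padded[:, half:half + ch], sq)
    denom = k
    for i in range(n):
        denom = denom + alpha * padded[:, i:i + ch]
    return x / denom ** beta

Applied to the tutorial code, it would wrap a layer's output, e.g.
local_response_normalization(layer0.output).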

On Tuesday, November 8, 2016 at 12:30:49 (UTC+1), Beatriz G. wrote:
>
> Also, I do not know if it is possible to add something in the conv2d layer
> class to make the output response-normalized. Any help would be welcome.
>
> Thank you.
>
On Tuesday, November 8, 2016 at 12:21:59 (UTC+1), Beatriz G. wrote:
>>
>> Does anyone know if there is something in Theano that works as a response
>> normalization layer? If I have to implement it and anyone knows how to do it,
>> any guidance would be welcome.
>>
>> Regards. 
>>
>
