[theano-users] Re: Error performing 2D convolution on Binomial distribution sample (gemm error)

2017-07-18 Thread Jesse Livezey
FYI, I created an issue to improve the error messages
https://github.com/Theano/Theano/issues/6167

On Tuesday, July 18, 2017 at 12:18:53 PM UTC-7, Jesse Livezey wrote:
>
> The conv2d operation doesn't support int64 (th_sampled) and it looks like 
> it doesn't fail gracefully with a sensible error message when the GpuCorrMM 
> op is used.
>
> If you cast th_sampled to float32 it should work fine. You'll also need to 
> cast kernel.
>
> On Tuesday, July 18, 2017 at 2:07:36 AM UTC-7, David Anderson wrote:
>>
>> Hi there!
>>
>> I'm implementing a convolutional operation and I'm getting an unexpected 
>> error when I try to perform a convolution on a Binomial-sampled tensor.
>>
>> The error is: 
>> RuntimeError: GpuCorrMM forward encountered an error running gemm: 5
>>
>> The error can be re-created with the following code (at least on my 
>> machine):
>>
>> import numpy as np
>> import theano as th
>> from theano import tensor as T
>> from theano.tensor.shared_randomstreams import RandomStreams
>>
>> rng = np.random.RandomState()
>> theano_rng = RandomStreams(rng.randint(2 ** 30))
>>
>> th_input = T.tensor4()
>> th_filter = T.tensor4()
>>
>> th_sampled = theano_rng.binomial(size=th_input.shape, n=1, p=th_input)
>> th_output = T.nnet.conv2d(th_sampled, th_filter)
>>
>> op = th.function(
>> inputs=[th_input, th_filter],
>> outputs=th_output
>> )
>>
>> input_sample = np.random.rand(1, 1, 28, 28)
>> kernel = np.random.rand(1, 1, 6, 6)
>>
>> op(input_sample, kernel)
>>
>>
>> Interestingly, the error is NOT shown for other distribution samples, 
>> like theano_rng.normal(), which has type RandomFunction{normal}.1 
>> instead of RandomFunction{binomial}.1
>>
>> For what it's worth, my THEANO_FLAGS are as follows:
>> floatX=float64,device=cuda,nvcc.flags=-D_FORCE_INLINES,exception_verbosity=high
>>
>> The rest of the stack trace is as follows:
>> Traceback (most recent call last):
>>   File "tmp2.py", line 23, in 
>> op(input_sample, kernel)
>>   File 
>> "/home/dave/miniconda2/lib/python2.7/site-packages/theano/compile/function_module.py",
>>  
>> line 898, in __call__
>> storage_map=getattr(self.fn, 'storage_map', None))
>>   File 
>> "/home/dave/miniconda2/lib/python2.7/site-packages/theano/gof/link.py", 
>> line 325, in raise_with_op
>> reraise(exc_type, exc_value, exc_trace)
>>   File 
>> "/home/dave/miniconda2/lib/python2.7/site-packages/theano/compile/function_module.py",
>>  
>> line 884, in __call__
>> self.fn() if output_subset is None else\
>> RuntimeError: GpuCorrMM forward encountered an error running gemm: 5
>> Apply node that caused the error: GpuCorrMM{valid, (1, 1), (1, 
>> 1)}(GpuContiguous.0, GpuContiguous.0)
>> Toposort index: 11
>> Inputs types: [GpuArrayType(int64, (False, False, False, False)), 
>> GpuArrayType(float64, (False, False, False, False))]
>> Inputs shapes: [(1, 1, 28, 28), (1, 1, 6, 6)]
>> Inputs strides: [(6272, 6272, 224, 8), (288, 288, 48, 8)]
>> Inputs values: ['not shown', 'not shown']
>> Inputs type_num: [7, 12]
>> Outputs clients: [[HostFromGpu(gpuarray)(GpuCorrMM{valid, (1, 1), (1, 
>> 1)}.0)]]
>>
>> Debugprint of the apply node: 
>> GpuCorrMM{valid, (1, 1), (1, 1)} [id A] > (False, False, False, False))> ''   
>>  |GpuContiguous [id B] > False))> ''   
>>  | |GpuFromHost [id C] > False, False))> ''   
>>  |   |RandomFunction{binomial}.1 [id D]  ''   
>>  | | [id E] 
>>  | |MakeVector{dtype='int64'} [id F]  ''   
>>  | | |Shape_i{0} [id G]  ''   
>>  | | | | [id H] 
>>  | | |Shape_i{1} [id I]  ''   
>>  | | | | [id H] 
>>  | | |Shape_i{2} [id J]  ''   
>>  | | | | [id H] 
>>  | | |Shape_i{3} [id K]  ''   
>>  | |   | [id H] 
>>  | |TensorConstant{1} [id L] 
>>  | | [id H] 
>>  |GpuContiguous [id M] > False))> ''   
>>|GpuFromHost [id N] > False, False))> ''   
>>  |Subtensor{::, ::, ::int64, ::int64} [id O] > 4D)> ''   
>>| [id P] 
>>|Constant{-1} [id Q] 
>>|Constant{-1} [id Q] 
>>
>> Storage map footprint:
>>  - GpuContiguous.0, Shape: (1, 1, 28, 28), ElemSize: 8 Byte(s), 
>> TotalSize: 6272 Byte(s)
>>  - , Input, Shape: (1, 1, 28, 28), ElemSize: 8 
>> Byte(s), TotalSize: 6272 Byte(s)
>>  - GpuContiguous.0, Shape: (1, 1, 6, 6), ElemSize: 8 Byte(s), 

[theano-users] Re: Error performing 2D convolution on Binomial distribution sample (gemm error)

2017-07-18 Thread Jesse Livezey
The conv2d operation doesn't support int64 (th_sampled) and it looks like 
it doesn't fail gracefully with a sensible error message when the GpuCorrMM 
op is used.

If you cast th_sampled to float32 it should work fine. You'll also need to 
cast kernel.
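
To make that concrete, here is a minimal sketch of one way to apply that suggestion, casting inside the graph with T.cast (casting the sample right after binomial() and casting the filter symbolically is just one option; passing float32 arrays in directly would also work):

import numpy as np
import theano as th
from theano import tensor as T
from theano.tensor.shared_randomstreams import RandomStreams

rng = np.random.RandomState()
theano_rng = RandomStreams(rng.randint(2 ** 30))

th_input = T.tensor4()
th_filter = T.tensor4()

# Sample as before, then cast the int64 sample to float32 so the
# convolution op receives a dtype it supports.
th_sampled = theano_rng.binomial(size=th_input.shape, n=1, p=th_input)
th_sampled = T.cast(th_sampled, 'float32')

# Cast the filter as well so both conv2d inputs have the same dtype.
th_output = T.nnet.conv2d(th_sampled, T.cast(th_filter, 'float32'))

op = th.function(inputs=[th_input, th_filter], outputs=th_output)

op(np.random.rand(1, 1, 28, 28), np.random.rand(1, 1, 6, 6))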

On Tuesday, July 18, 2017 at 2:07:36 AM UTC-7, David Anderson wrote:
>
> Hi there!
>
> I'm implementing a convolutional operation and I'm getting an unexpected 
> error when I try to perform a convolution on a Binomial-sampled tensor.
>
> The error is: 
> RuntimeError: GpuCorrMM forward encountered an error running gemm: 5
>
> The error can be re-created with the following code (at least on my 
> machine):
>
> import numpy as np
> import theano as th
> from theano import tensor as T
> from theano.tensor.shared_randomstreams import RandomStreams
>
> rng = np.random.RandomState()
> theano_rng = RandomStreams(rng.randint(2 ** 30))
>
> th_input = T.tensor4()
> th_filter = T.tensor4()
>
> th_sampled = theano_rng.binomial(size=th_input.shape, n=1, p=th_input)
> th_output = T.nnet.conv2d(th_sampled, th_filter)
>
> op = th.function(
> inputs=[th_input, th_filter],
> outputs=th_output
> )
>
> input_sample = np.random.rand(1, 1, 28, 28)
> kernel = np.random.rand(1, 1, 6, 6)
>
> op(input_sample, kernel)
>
>
> Interestingly, the error is NOT shown for other distribution samples, like 
> theano_rng.normal(), 
> which has type RandomFunction{normal}.1 instead 
> of RandomFunction{binomial}.1
>
> For what it's worth, my THEANO_FLAGS are as follows:
> floatX=float64,device=cuda,nvcc.flags=-D_FORCE_INLINES,exception_verbosity=high
>
> The rest of the stack trace is as follows:
> Traceback (most recent call last):
>   File "tmp2.py", line 23, in 
> op(input_sample, kernel)
>   File 
> "/home/dave/miniconda2/lib/python2.7/site-packages/theano/compile/function_module.py",
>  
> line 898, in __call__
> storage_map=getattr(self.fn, 'storage_map', None))
>   File 
> "/home/dave/miniconda2/lib/python2.7/site-packages/theano/gof/link.py", 
> line 325, in raise_with_op
> reraise(exc_type, exc_value, exc_trace)
>   File 
> "/home/dave/miniconda2/lib/python2.7/site-packages/theano/compile/function_module.py",
>  
> line 884, in __call__
> self.fn() if output_subset is None else\
> RuntimeError: GpuCorrMM forward encountered an error running gemm: 5
> Apply node that caused the error: GpuCorrMM{valid, (1, 1), (1, 
> 1)}(GpuContiguous.0, GpuContiguous.0)
> Toposort index: 11
> Inputs types: [GpuArrayType(int64, (False, False, False, False)), 
> GpuArrayType(float64, (False, False, False, False))]
> Inputs shapes: [(1, 1, 28, 28), (1, 1, 6, 6)]
> Inputs strides: [(6272, 6272, 224, 8), (288, 288, 48, 8)]
> Inputs values: ['not shown', 'not shown']
> Inputs type_num: [7, 12]
> Outputs clients: [[HostFromGpu(gpuarray)(GpuCorrMM{valid, (1, 1), (1, 
> 1)}.0)]]
>
> Debugprint of the apply node: 
> GpuCorrMM{valid, (1, 1), (1, 1)} [id A]  False, False, False))> ''   
>  |GpuContiguous [id B]  False))> ''   
>  | |GpuFromHost [id C]  False, False))> ''   
>  |   |RandomFunction{binomial}.1 [id D]  ''   
>  | | [id E] 
>  | |MakeVector{dtype='int64'} [id F]  ''   
>  | | |Shape_i{0} [id G]  ''   
>  | | | | [id H] 
>  | | |Shape_i{1} [id I]  ''   
>  | | | | [id H] 
>  | | |Shape_i{2} [id J]  ''   
>  | | | | [id H] 
>  | | |Shape_i{3} [id K]  ''   
>  | |   | [id H] 
>  | |TensorConstant{1} [id L] 
>  | | [id H] 
>  |GpuContiguous [id M]  False))> ''   
>|GpuFromHost [id N]  False, False))> ''   
>  |Subtensor{::, ::, ::int64, ::int64} [id O]  
> ''   
>| [id P] 
>|Constant{-1} [id Q] 
>|Constant{-1} [id Q] 
>
> Storage map footprint:
>  - GpuContiguous.0, Shape: (1, 1, 28, 28), ElemSize: 8 Byte(s), TotalSize: 
> 6272 Byte(s)
>  - , Input, Shape: (1, 1, 28, 28), ElemSize: 8 
> Byte(s), TotalSize: 6272 Byte(s)
>  - GpuContiguous.0, Shape: (1, 1, 6, 6), ElemSize: 8 Byte(s), TotalSize: 
> 288 Byte(s)
>  - , Input, Shape: (1, 1, 6, 6), ElemSize: 8 
> Byte(s), TotalSize: 288 Byte(s)
>  - Constant{-1}, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
>  - TensorConstant{1}, Shape: (), ElemSize: 1 Byte(s), TotalSize: 1.0 
> Byte(s)
>  

[theano-users] WARNING (theano.gof.cmodule): OPTIMIZATION WARNING: Theano was not able to find the default g++ parameters. This is needed to tune the compilation to your specific CPU. This can slow do

2017-07-18 Thread mai Khai
Dear group,
I ran into this problem when I ran a program using Theano:
"WARNING (theano.gof.cmodule): OPTIMIZATION WARNING: Theano was not able to 
find the default g++ parameters. This is needed to tune the compilation to 
your specific CPU. This can slow down the execution of Theano functions. 
Please submit the following lines to Theano's mailing list so that we can 
fix this problem:"
Here is the content to submit: 

 ['# 1 ""\n', '# 1 ""\n', '# 1 ""\n', '# 1 
"/usr/include/stdc-predef.h" 1 3 4\n', '# 1 "" 2\n', '# 1 
""\n', 'Using built-in specs.\n', 'COLLECT_GCC=/usr/bin/g++\n', 
'Target: aarch64-linux-gnu\n', "Configured with: ../src/configure -v 
--with-pkgversion='Ubuntu/Linaro 5.4.0-6ubuntu1~16.04.4' 
--with-bugurl=file:///usr/share/doc/gcc-5/README.Bugs 
--enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --prefix=/usr 
--program-suffix=-5 --enable-shared --enable-linker-build-id 
--libexecdir=/usr/lib --without-included-gettext --enable-threads=posix 
--libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu 
--enable-libstdcxx-debug --enable-libstdcxx-time=yes 
--with-default-libstdcxx-abi=new --enable-gnu-unique-object 
--disable-libquadmath --enable-plugin --with-system-zlib 
--disable-browser-plugin --enable-java-awt=gtk --enable-gtk-cairo 
--with-java-home=/usr/lib/jvm/java-1.5.0-gcj-5-arm64/jre --enable-java-home 
--with-jvm-root-dir=/usr/lib/jvm/java-1.5.0-gcj-5-arm64 
--with-jvm-jar-dir=/usr/lib/jvm-exports/java-1.5.0-gcj-5-arm64 
--with-arch-directory=aarch64 
--with-ecj-jar=/usr/share/java/eclipse-ecj.jar --enable-multiarch 
--enable-fix-cortex-a53-843419 --disable-werror --enable-checking=release 
--build=aarch64-linux-gnu --host=aarch64-linux-gnu 
--target=aarch64-linux-gnu\n", 'Thread model: posix\n', 'gcc version 5.4.0 
20160609 (Ubuntu/Linaro 5.4.0-6ubuntu1~16.04.4) \n', 
"COLLECT_GCC_OPTIONS='-E' '-v' '-shared-libgcc' '-mlittle-endian' 
'-mabi=lp64'\n", ' /usr/lib/gcc/aarch64-linux-gnu/5/cc1 -E -quiet -v 
-imultiarch aarch64-linux-gnu - -mlittle-endian -mabi=lp64 
-fstack-protector-strong -Wformat -Wformat-security\n', 'ignoring 
nonexistent directory "/usr/local/include/aarch64-linux-gnu"\n', 'ignoring 
nonexistent directory 
"/usr/lib/gcc/aarch64-linux-gnu/5/../../../../aarch64-linux-gnu/include"\n', 
'#include "..." search starts here:\n', '#include <...> search starts 
here:\n', ' /usr/lib/gcc/aarch64-linux-gnu/5/include\n', ' 
/usr/local/include\n', ' /usr/lib/gcc/aarch64-linux-gnu/5/include-fixed\n', 
' /usr/include/aarch64-linux-gnu\n', ' /usr/include\n', 'End of search 
list.\n', 
'COMPILER_PATH=/usr/lib/gcc/aarch64-linux-gnu/5/:/usr/lib/gcc/aarch64-linux-gnu/5/:/usr/lib/gcc/aarch64-linux-gnu/:/usr/lib/gcc/aarch64-linux-gnu/5/:/usr/lib/gcc/aarch64-linux-gnu/\n',
 
'LIBRARY_PATH=/usr/lib/gcc/aarch64-linux-gnu/5/:/usr/lib/gcc/aarch64-linux-gnu/5/../../../aarch64-linux-gnu/:/usr/lib/gcc/aarch64-linux-gnu/5/../../../../lib/:/lib/aarch64-linux-gnu/:/lib/../lib/:/usr/lib/aarch64-linux-gnu/:/usr/lib/../lib/:/usr/lib/gcc/aarch64-linux-gnu/5/../../../:/lib/:/usr/lib/\n',
 
"COLLECT_GCC_OPTIONS='-E' '-v' '-shared-libgcc' '-mlittle-endian' 
'-mabi=lp64'\n"]
error at line: 4592

Best regards,
Khai 



[theano-users] Re: Momentum term

2017-07-18 Thread Beatriz G.
Hi everyone.

I would like to know what momentum is used for. I think it has something 
to do with the weight updates; I have been reading about it, but I still 
do not fully understand. Does it have something to do with a dynamic 
learning rate?

Regards.


On Thursday, March 6, 2014 at 17:25:01 (UTC+1), Al Docherty wrote:
>
> Hello again,
>
> I'm considering adding momentum to my neural network implementation. The 
> gradients and updates are calculated as follows:
>
> ### OBTAIN PARAMETERS AND GRADIENTS
>   gparams = []
>   for param in classifier.params:
>       gparam = T.grad(printcost, param)
>       gparams.append(gparam)
> 
>   ### CALCULATE CHANGE IN WEIGHTS
>   updates = []
>   for param, gparam in zip(classifier.params, gparams):
>       updates.append((param, param - eta * gparam))
>
>
> I know I need to add the momentum term to the updates.append line. But how 
> do I store an old set of gradients? 
>
> Al
>
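
Not speaking for how the original poster ended up implementing it, but here is a minimal sketch of one common formulation: classical momentum, with one Theano shared variable per parameter holding the previous update. It reuses classifier, printcost and eta from the quoted snippet, assumes classifier.params are Theano shared variables, and the names mu and velocities are illustrative:

import numpy as np
import theano as th
from theano import tensor as T

mu = 0.9  # momentum coefficient (illustrative value)

### OBTAIN PARAMETERS AND GRADIENTS
gparams = [T.grad(printcost, param) for param in classifier.params]

### ONE "VELOCITY" PER PARAMETER, INITIALISED TO ZERO, STORES THE PREVIOUS UPDATE
velocities = [
    th.shared(np.zeros(param.get_value().shape, dtype=param.dtype))
    for param in classifier.params
]

### CALCULATE CHANGE IN WEIGHTS WITH MOMENTUM
updates = []
for param, gparam, vel in zip(classifier.params, gparams, velocities):
    new_vel = mu * vel - eta * gparam       # blend previous step with new gradient
    updates.append((vel, new_vel))          # remember it for the next call
    updates.append((param, param + new_vel))

With mu = 0 this reduces to the plain update in the quoted code; a larger mu keeps more of the previous step, which smooths the updates. It is a separate idea from a decaying or adaptive learning rate, although the two are often combined.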



[theano-users] Error performing 2D convolution on Binomial distribution sample (gemm error)

2017-07-18 Thread David Anderson
Hi there!

I'm implementing a convolutional operation and I'm getting an unexpected 
error when I try to perform a convolution on a Binomial-sampled tensor.

The error is: 
RuntimeError: GpuCorrMM forward encountered an error running gemm: 5

The error can be re-created with the following code (at least on my 
machine):

import numpy as np
import theano as th
from theano import tensor as T
from theano.tensor.shared_randomstreams import RandomStreams

rng = np.random.RandomState()
theano_rng = RandomStreams(rng.randint(2 ** 30))

th_input = T.tensor4()
th_filter = T.tensor4()

th_sampled = theano_rng.binomial(size=th_input.shape, n=1, p=th_input)
th_output = T.nnet.conv2d(th_sampled, th_filter)

op = th.function(
inputs=[th_input, th_filter],
outputs=th_output
)

input_sample = np.random.rand(1, 1, 28, 28)
kernel = np.random.rand(1, 1, 6, 6)

op(input_sample, kernel)


Interestingly, the error is NOT shown for other distribution samples, like 
theano_rng.normal(), 
which has type RandomFunction{normal}.1 instead 
of RandomFunction{binomial}.1

For what it's worth, my THEANO_FLAGS are as follows:
floatX=float64,device=cuda,nvcc.flags=-D_FORCE_INLINES,exception_verbosity=high

The rest of the stack trace is as follows:
Traceback (most recent call last):
  File "tmp2.py", line 23, in 
op(input_sample, kernel)
  File 
"/home/dave/miniconda2/lib/python2.7/site-packages/theano/compile/function_module.py",
 
line 898, in __call__
storage_map=getattr(self.fn, 'storage_map', None))
  File 
"/home/dave/miniconda2/lib/python2.7/site-packages/theano/gof/link.py", 
line 325, in raise_with_op
reraise(exc_type, exc_value, exc_trace)
  File 
"/home/dave/miniconda2/lib/python2.7/site-packages/theano/compile/function_module.py",
 
line 884, in __call__
self.fn() if output_subset is None else\
RuntimeError: GpuCorrMM forward encountered an error running gemm: 5
Apply node that caused the error: GpuCorrMM{valid, (1, 1), (1, 
1)}(GpuContiguous.0, GpuContiguous.0)
Toposort index: 11
Inputs types: [GpuArrayType(int64, (False, False, False, False)), 
GpuArrayType(float64, (False, False, False, False))]
Inputs shapes: [(1, 1, 28, 28), (1, 1, 6, 6)]
Inputs strides: [(6272, 6272, 224, 8), (288, 288, 48, 8)]
Inputs values: ['not shown', 'not shown']
Inputs type_num: [7, 12]
Outputs clients: [[HostFromGpu(gpuarray)(GpuCorrMM{valid, (1, 1), (1, 
1)}.0)]]

Debugprint of the apply node: 
GpuCorrMM{valid, (1, 1), (1, 1)} [id A]  ''   
 |GpuContiguous [id B]  ''   
 | |GpuFromHost [id C]  ''   
 |   |RandomFunction{binomial}.1 [id D]  ''   
 | | [id E] 
 | |MakeVector{dtype='int64'} [id F]  ''   
 | | |Shape_i{0} [id G]  ''   
 | | | | [id H] 
 | | |Shape_i{1} [id I]  ''   
 | | | | [id H] 
 | | |Shape_i{2} [id J]  ''   
 | | | | [id H] 
 | | |Shape_i{3} [id K]  ''   
 | |   | [id H] 
 | |TensorConstant{1} [id L] 
 | | [id H] 
 |GpuContiguous [id M]  ''   
   |GpuFromHost [id N]  ''   
 |Subtensor{::, ::, ::int64, ::int64} [id O]  
''   
   | [id P] 
   |Constant{-1} [id Q] 
   |Constant{-1} [id Q] 

Storage map footprint:
 - GpuContiguous.0, Shape: (1, 1, 28, 28), ElemSize: 8 Byte(s), TotalSize: 
6272 Byte(s)
 - , Input, Shape: (1, 1, 28, 28), ElemSize: 8 
Byte(s), TotalSize: 6272 Byte(s)
 - GpuContiguous.0, Shape: (1, 1, 6, 6), ElemSize: 8 Byte(s), TotalSize: 
288 Byte(s)
 - , Input, Shape: (1, 1, 6, 6), ElemSize: 8 
Byte(s), TotalSize: 288 Byte(s)
 - Constant{-1}, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
 - TensorConstant{1}, Shape: (), ElemSize: 1 Byte(s), TotalSize: 1.0 Byte(s)
 TotalSize: 13129.0 Byte(s) 0.000 GB
 TotalSize inputs: 6569.0 Byte(s) 0.000 GB

Am I doing something wrong here? Any idea how I might get around this? It 
works if I split the code into two functions: one that does the sampling 
and returns the tensor, and another that takes that result and does the 
convolution. But it seems wasteful to copy the value from GPU RAM back to 
CPU RAM just to get around this...
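
For reference, a rough reconstruction of that two-function split, reusing the symbols and arrays defined in the code above (this is only an illustration under the assumption that the sample is brought back to the host and cast to floatX between the two calls, not the actual workaround code):

# Function 1: only the binomial sampling; the int64 sample comes back to host memory.
sample_fn = th.function(inputs=[th_input], outputs=th_sampled)

# Function 2: only the convolution, fed an already-sampled tensor (floatX input).
th_sample_in = T.tensor4()
conv_fn = th.function(inputs=[th_sample_in, th_filter],
                      outputs=T.nnet.conv2d(th_sample_in, th_filter))

sampled = sample_fn(input_sample)
result = conv_fn(sampled.astype(np.float64), kernel)  # host-side cast to match floatX=float64

The host-side round trip in the last two lines is exactly the copy from GPU to CPU memory described above.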

Any advice would be hugely appreciated!

Cheers,
Dave
