[theano-users] ImportError: undefined symbol: _ZdlPvm - possible compiler problem

2017-03-13 Thread Jonathan Bruck
Hi 

I'm running Ubuntu 16.10 with anaconda3 installed and Theano from conda-forge. Nothing 
works due to the errors below.
I've tried various options for the cxx flag (g++-5, g++-4.9, /usr/bin/g++-4.8, 
etc.).
The .theanorc file is being read, since it correctly picks up the GPU when that 
is enabled, but compilation still fails.
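
For reference, a quick way to confirm which compiler Theano actually picks up 
(setting THEANO_FLAGS from Python just to keep the snippet self-contained; the 
g++ path is only an example, and normally the value comes from .theanorc):

import os
os.environ["THEANO_FLAGS"] = "cxx=/usr/bin/g++-5"  # must be set before importing theano
import theano
print(theano.config.cxx)  # shows the compiler Theano will call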

Any help please :)

Jonathan

jonathan@melange:~$ python
Python 3.5.2 |Anaconda 4.3.0 (64-bit)| (default, Jul  2 2016, 17:53:06) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import theano
>>> theano.__version__
'0.8.2'
>>> 
jonathan@melange:~$ cat .theanorc
[global]
floatX = float32
# device = gpu

[g++]
cxx = /usr/bin/x86_64-linux-gnu-g++-4.8

[lib]
cnmem = 0.7
jonathan@melange:~$ python `python -c "import os, theano; print(os.path.dirname(theano.__file__))"`/misc/check_blas.py

Some Theano flags:
blas.ldflags= -L/usr/local/lib -lopenblas -lopenblas
compiledir= /home/jonathan/.theano/compiledir_Linux-4.8--generic-x86_64-
with-debian-stretch-sid-x86_64-3.5.2-64
floatX= float32
device= cpu
Some OS information:
sys.platform= linux
sys.version= 3.5.2 |Anaconda 4.3.0 (64-bit)| (default, Jul  2 2016, 17:
53:06) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)]
sys.prefix= /home/jonathan/Apps/Anaconda3
Some environment variables:
MKL_NUM_THREADS= None
OMP_NUM_THREADS= None
GOTO_NUM_THREADS= None


Numpy config: (used when the Theano flag "blas.ldflags" is empty)
openblas_lapack_info:
language = c
define_macros = [('HAVE_CBLAS', None)]
libraries = ['openblas', 'openblas']
library_dirs = ['/usr/local/lib']
lapack_opt_info:
language = c
define_macros = [('HAVE_CBLAS', None)]
libraries = ['openblas', 'openblas']
library_dirs = ['/usr/local/lib']
blas_mkl_info:
  NOT AVAILABLE
blis_info:
  NOT AVAILABLE
openblas_info:
language = c
define_macros = [('HAVE_CBLAS', None)]
libraries = ['openblas', 'openblas']
library_dirs = ['/usr/local/lib']
blas_opt_info:
language = c
define_macros = [('HAVE_CBLAS', None)]
libraries = ['openblas', 'openblas']
library_dirs = ['/usr/local/lib']
lapack_mkl_info:
  NOT AVAILABLE
Numpy dot module: numpy.core.multiarray
Numpy location: /home/jonathan/Apps/Anaconda3/lib/python3.5/site-packages/
numpy/__init__.py
Numpy version: 1.12.0
ERROR (theano.gof.opt): Optimization failure due to: constant_folding
ERROR (theano.gof.opt): node: DimShuffle{x,x}(TensorConstant{
0.80011920929})
ERROR (theano.gof.opt): TRACEBACK:
ERROR (theano.gof.opt): Traceback (most recent call last):
  File 
"/home/jonathan/Apps/Anaconda3/lib/python3.5/site-packages/theano/gof/opt.py"
, line 1772, in process_node
replacements = lopt.transform(node)
  File 
"/home/jonathan/Apps/Anaconda3/lib/python3.5/site-packages/theano/tensor/opt.py"
, line 5825, in constant_folding
no_recycling=[])
  File 
"/home/jonathan/Apps/Anaconda3/lib/python3.5/site-packages/theano/gof/op.py"
, line 970, in make_thunk
no_recycling)
  File 
"/home/jonathan/Apps/Anaconda3/lib/python3.5/site-packages/theano/gof/op.py"
, line 879, in make_c_thunk
output_storage=node_output_storage)
  File 
"/home/jonathan/Apps/Anaconda3/lib/python3.5/site-packages/theano/gof/cc.py"
, line 1200, in make_thunk
keep_lock=keep_lock)
  File 
"/home/jonathan/Apps/Anaconda3/lib/python3.5/site-packages/theano/gof/cc.py"
, line 1143, in __compile__
keep_lock=keep_lock)
  File 
"/home/jonathan/Apps/Anaconda3/lib/python3.5/site-packages/theano/gof/cc.py"
, line 1595, in cthunk_factory
key=key, lnk=self, keep_lock=keep_lock)
  File 
"/home/jonathan/Apps/Anaconda3/lib/python3.5/site-packages/theano/gof/cmodule.py"
, line 1142, in module_from_key
module = lnk.compile_cmodule(location)
  File 
"/home/jonathan/Apps/Anaconda3/lib/python3.5/site-packages/theano/gof/cc.py"
, line 1506, in compile_cmodule
preargs=preargs)
  File 
"/home/jonathan/Apps/Anaconda3/lib/python3.5/site-packages/theano/gof/cmodule.py"
, line 2213, in compile_str
return dlimport(lib_filename)
  File 
"/home/jonathan/Apps/Anaconda3/lib/python3.5/site-packages/theano/gof/cmodule.py"
, line 299, in dlimport
rval = __import__(module_name, {}, {}, [module_name])
ImportError: /home/jonathan/.theano/compiledir_Linux-4.8--generic-x86_64-
with-debian-stretch-sid-x86_64-3.5.2-64/tmppn2ug2zy/
mdb219947724f79219f7dbd36f0f52c77.so: undefined symbol: _ZdlPvm





Re: [theano-users] custom elemwise Op not getting fused

2017-03-13 Thread Frédéric Bastien
With this code snippet, I'm not able to run it. There are other problems
with the code that keep it from working. For example, what is "int32" in
your code? I tried a few things, but it didn't run.

Can you give a full working example?

Why do you make do_constant_folding() return False?

There are a few reasons that could deactivate the fusion of elemwise ops, but I
don't see one that would apply in your case. One of them is not having C
code (which you have). Another is if the node is used by more than one other node
in the graph. We don't want to duplicate computation, so we don't fuse
them. I would need a working example to investigate further.
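
If you want to check on your side whether fusion happened, a minimal standalone
sketch like this (not using your new Ops) prints the optimized graph; when fusion
works, you see a single Elemwise{Composite{...}} node instead of separate
Exp/Mul/Add nodes:

import theano
import theano.tensor as T

x = T.fvector('x')
y = T.exp(x) + 2 * x  # two elemwise ops feeding a single consumer
f = theano.function([x], y)

# Print the optimized graph; look for an Elemwise{Composite{...}} node.
theano.printing.debugprint(f)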

Fred

On Thu, Mar 9, 2017 at 8:51 AM Adam Becker  wrote:

> Hi,
>
> I'm close to a working PoC for the generalized elemwise Op (CPU for now).
> However it appears the Op is not getting properly fused with other elemwise
> Ops.
>
> There are two new scalar Ops, ElemIdx and ElemAt, with respective Elemwise
> subclass: TensorIdx and TensorAt.
>
> The definitions of the new Ops:
>
> class ElemIdx(ScalarOp):
> '''
> This gives tensor indices along an axis. All the indices are computed
> on the fly during elemwise thus much less memory consumption.
> This operates on tensor object while able to fuse with elemwise
> This is similar to threadIdx.* in CUDA
>
> '''
> # TODO
> # - finish DOCS
> # - should be 0 inps -> 1 outs, like constant,
> #   however theano is not happy with 0 inps for now
> # - support negative axis
> # - make axis symbolic var?
> # - implement numpy.intp for output type?
> __props__ = ('axis',)
> nin = 1
> nout = 1
>
>
> def __init__(self, axis, **kwargs):
> super(ElemIdx, self).__init__(**kwargs)
> self.axis = axis
>
> def c_code(self, node, name, inputs, outputs, sub):
> inp, = inputs
> out, = outputs
> axis = self.axis
> # protect substitutions at Elemwise
> l_sub = '%(l_sub)s'
> r_sub = '%(r_sub)s'
> idx_var = 'IDX_%(inp)s_%(axis)d' % locals()
> code = '''
> #ifdef TENSOR_ELEMWISE
> %(out)s = %(l_sub)s%(idx_var)s%(r_sub)s;
> #endif
> ''' % locals()
> return code
>
> # TODO def c_code_contiguous(self):
> def c_code_cache_version(self):
> return (0,)
>
> def do_constant_folding(self, node):
> return False
>
> def output_types(self, *inp_types):
> return (int32,)
>
> class ElemAt(ScalarOp):
> '''
> Similar to adv. subtensor however works with elemwise.
> This is the opposite of ElemIdx
> '''
> # TODO finish DOCS
> nout = 1
>
>
> def __init__(self, ndim, **kwargs):
> super(ElemAt, self).__init__(**kwargs)
> self.nin = 1+ndim
>
> def c_code(self, node, name, inputs, outputs, sub):
> inp = inputs[0]
> out, = outputs
> idxs = inputs[1:]
> code = '%(out)s = %(inp)ster[' % locals()
> terms = []
> # protect nested substitutions at Elemwise
> l_sub = '%(l_sub)s'
> r_sub = '%(r_sub)s'
> for axis, idx in enumerate(idxs):
> strd_var = 'STRD_%(inp)s_%(axis)d' % locals()
> terms.append(
> '%(idx)s*%(l_sub)s%(strd_var)s%(r_sub)s' % locals())
> code += ' + '.join(terms) + '];\n'
> return '''
> #ifdef TENSOR_ELEMWISE
> %s
> #endif\n''' % code
>
> def c_code_cache_version(self):
> return (0,)
>
> def do_constant_folding(self, node):
> return False
>
> def output_types(self, inp_types):
> # pdb.set_trace()
> return inp_types[:1]
>
> class TensorIdx(Elemwise):
> # TODO DOCS
> __props__ = Elemwise.__props__
> def __init__(self, axis, **kwargs):
> super(TensorIdx, self).__init__(
> scalar_op=ElemIdx(axis),
> **kwargs)
>
> def __str__(self):
> name = 'idx' if self.name is None else self.name
> axis = self.scalar_op.axis
> return '%(name)s{%(axis)d}' % locals()
>
> def do_constant_folding(self, node):
> return False
>
> class TensorAt(Elemwise):
> # TODO DOCS
> __props__ = Elemwise.__props__
> def __init__(self, ndim, **kwargs):
> super(TensorAt, self).__init__(
> scalar_op=ElemAt(ndim),
> **kwargs)
>
> def __str__(self):
> name = 'at' if self.name is None else self.name
> ndim = self.scalar_op.nin - 1
> return '%(name)s{%(ndim)dD}' % locals()
>
> def do_constant_folding(self, node):
> return False
>
> def idx(x, axis):
> if not isinstance(axis, int):
> raise TypeError('axis must be integer')
> return TensorIdx(axis)(x)
>
> def at_idx(x, *idxs):
> return TensorAt(x.ndim)(x, *idxs)
>
> There are also many hacks done to elemwise.py and elemwise_cgen.py to make
> this work. (link to branch
> 

Re: [theano-users] Using nvprof with theano code on windows cmd - no profiling data recorded

2017-03-13 Thread Frédéric Bastien
On linux, this was working in the past:

nvprof python my_python_script.py

I just double-checked and it works with both the new back-end and the old
back-end, using this as an example:

export THEANO_FLAGS=device=cuda ; nvprof python -c "import
theano;v=theano.tensor.fvector();f=theano.function([v], v+1); f([1])"

So the problem seems to be that Windows has different requirements for
this. Maybe you need to modify Theano to call the function you named. But
maybe it is simpler to install Linux?

If you try to modify Theano, make sure to modify the new back-end (with
device=cuda). We will get rid of the old one soon and it would be a waste
of time.
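
If modifying Theano is too much work, one workaround I have not tested on Windows
would be to flush the profiler yourself at the very end of the script, by calling
cudaProfilerStop() from the CUDA runtime DLL through ctypes (the DLL name below
is a guess for CUDA 7.5; adjust it to your install):

import ctypes

# Load the CUDA runtime already used by the process and ask it to flush
# the profiling data before the interpreter exits.
cudart = ctypes.windll.LoadLibrary("cudart64_75.dll")
cudart.cudaProfilerStop()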

Frédéric

On Mon, Mar 13, 2017 at 6:14 PM  wrote:

> Hi there,
>
> I was wanting to use nvprof with theano code on windows command line.
>
> With CUDA code, I compile with:
>
> nvcc Test.cu -ccbin "C:\Program Files (x86)\Microsoft Visual Studio
> 10.0\VC" -o Test1 "C:\Program Files\NVIDIA GPU Computing
> Toolkit\CUDA\v7.5\lib\x64\cuda.lib" "C:\Program Files\NVIDIA GPU Computing
> Toolkit\CUDA\v7.5\lib\x64\cudart.lib"
>
> And then nvprof can be used:
>
> "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v7.5\bin\nvprof.exe"
> Test1.exe
>
> Test1.cu has to have cudaProfilerStop() at the end of main, and have
> #include cuda.h and #include cuda_profiler_api.h, and the linked libraries
> in the compile stage. If it doesn't, nvprof doesn't output any profiling
> information and gives a warning: Some profiling data are not recorded. Make
> sure cudaProfilerStop() or cuProfilerStop() is called before application
> exit to flush profile data.
>
> For a theano python code, I have a .theanorc file with:
> [nvcc]
> compiler_bindir=C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\bin
>
> [cuda]
> root=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v7.5.
>
> When I do:
>
> nvprof python theano_code.py
>
> The code runs but I don't get any profiling information and the same
> warning about requiring cudaProfilerStop() to flush. So my question is what
> function can I use in the theano code to achieve the same as
> cudaProfilerStop() and what libraries would I need to link in the theanorc
> file for the nvcc compile phase?  From what I've seen/googled it seems like
> nvprof should just work with the python/theano code and I've played around
> a bit with editing the theanorc file but I can't get it to work!
>
> Anyone got any tips please? Would be much appreciated!
> Thanks,
> Jo
>
>



Re: [theano-users] Re: theano install nvcc fatal : Unknown option 'fPIC'

2017-03-13 Thread Frédéric Bastien
Try the Theano beta conda package:

Conda install -r raydonelly Theano

That's from memory, I'm offline. If that doesn't work, look in the Theano issues
where I give the correct command line.

Fred

On Fri, Mar 10, 2017 at 22:01, 李奕 wrote:

> Thanks, I use anaconda. So what should I do next to solve this problem?
>
> On Friday, March 3, 2017 at 9:46:18 PM UTC+8, nouiz wrote:
>
> How did you get Python? If you compiled it yourself, you missed a
> compilation option. I have a stack overflow answer to that. Search for fPIC
> and Theano.
>
> Fred
>
> On Tue, Feb 28, 2017 at 23:23, 李奕 wrote:
>
> Thanks very much. Actually, in my anaconda installation, theano + gpu (cuda8.0)
> + python3.5 is OK; I want to install it in a virtual environment.
> In the virtual environment, I run "pip install --upgrade --no-deps git+git://
> github.com/Theano/Theano.git" to install the newest version;
> but when I run import theano, the error (unknown option 'fPIC') appears.
> My environment is a conda (virtualenv) Python 2 virtual environment, so I
> wonder whether virtual environments affect theano? Thanks.
>
> On Tuesday, February 28, 2017 at 9:47:41 PM UTC+8, Ankit Shah wrote:
>
> This is an error related to floating point number conversion/approximation.
> Use the latest version of theano and keras [if using]; there is a
> fix provided.
>
> On Tuesday, February 28, 2017 at 11:55:06 AM UTC+5:30, 李奕 wrote:
>
> Hello,
>
> when I run import theano, the error is:
> ['nvcc', '-shared', '-O3', '-use_fast_math', '--compiler-bindir',
> '/usr/local/cuda/bin/nvcc', '-m64', '-Xcompiler',
> '-DCUDA_NDARRAY_CUH=c72d035fdf91890f3b36710688069b2e,-DNPY_NO_DEPRECATED_API=NPY_1_7_API_VERSION,-fPIC,-fvisibility=hidden',
> '-Xlinker',
> '-rpath,/home/mlx/.theano/compiledir_Linux-4.2--generic-x86_64-with-debian-jessie-sid-x86_64-2.7.13-64/cuda_ndarray',
> '-I/home/mlx/anaconda3/envs/py2/lib/python2.7/site-packages/theano/sandbox/cuda',
> '-I/home/mlx/anaconda3/envs/py2/lib/python2.7/site-packages/numpy/core/include',
> '-I/home/mlx/anaconda3/envs/py2/include/python2.7',
> '-I/home/mlx/anaconda3/envs/py2/lib/python2.7/site-packages/theano/gof',
> '-L/home/mlx/anaconda3/envs/py2/lib', '-o',
> '/home/mlx/.theano/compiledir_Linux-4.2--generic-x86_64-with-debian-jessie-sid-x86_64-2.7.13-64/cuda_ndarray/cuda_ndarray.so',
> 'mod.cu', '-lcublas', '-lpython2.7', '-lcudart']
> ERROR (theano.sandbox.cuda): Failed to compile cuda_ndarray.cu: ('nvcc
> return status', 1, 'for cmd', 'nvcc -shared -O3 -use_fast_math
> --compiler-bindir /usr/local/cuda/bin/nvcc -m64 -Xcompiler
> -DCUDA_NDARRAY_CUH=c72d035fdf91890f3b36710688069b2e,-DNPY_NO_DEPRECATED_API=NPY_1_7_API_VERSION,-fPIC,-fvisibility=hidden
> -Xlinker
> -rpath,/home/mlx/.theano/compiledir_Linux-4.2--generic-x86_64-with-debian-jessie-sid-x86_64-2.7.13-64/cuda_ndarray
> -I/home/mlx/anaconda3/envs/py2/lib/python2.7/site-packages/theano/sandbox/cuda
> -I/home/mlx/anaconda3/envs/py2/lib/python2.7/site-packages/numpy/core/include
> -I/home/mlx/anaconda3/envs/py2/include/python2.7
> -I/home/mlx/anaconda3/envs/py2/lib/python2.7/site-packages/theano/gof
> -L/home/mlx/anaconda3/envs/py2/lib -o
> /home/mlx/.theano/compiledir_Linux-4.2--generic-x86_64-with-debian-jessie-sid-x86_64-2.7.13-64/cuda_ndarray/cuda_ndarray.so
> mod.cu -lcublas -lpython2.7 -lcudart')
> WARNING (theano.sandbox.cuda): The cuda backend is deprecated and will be
> removed in the next release (v0.10).  Please switch to the gpuarray
> backend. You can get more information about how to switch at this URL:
>
> https://github.com/Theano/Theano/wiki/Converting-to-the-new-gpu-back-end%28gpuarray%29
>
> WARNING (theano.sandbox.cuda): CUDA is installed, but device gpu is not
> available  (error: cuda unavailable)
>
>
> So what's the going on here? Thanks very much
>
>



[theano-users] Using nvprof with theano code on windows cmd - no profiling data recorded

2017-03-13 Thread jcc525
Hi there,

I was wanting to use nvprof with theano code on windows command line.

With CUDA code, I compile with:

nvcc Test.cu -ccbin "C:\Program Files (x86)\Microsoft Visual Studio 
10.0\VC" -o Test1 "C:\Program Files\NVIDIA GPU Computing 
Toolkit\CUDA\v7.5\lib\x64\cuda.lib" "C:\Program Files\NVIDIA GPU Computing 
Toolkit\CUDA\v7.5\lib\x64\cudart.lib"

And then nvprof can be used:

"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v7.5\bin\nvprof.exe" 
Test1.exe

Test1.cu has to have cudaProfilerStop() at the end of main, and have 
#include cuda.h and #include cuda_profiler_api.h, and the linked libraries 
in the compile stage. If it doesn't, nvprof doesn't output any profiling 
information and gives a warning: Some profiling data are not recorded. Make 
sure cudaProfilerStop() or cuProfilerStop() is called before application 
exit to flush profile data.

For a theano python code, I have a .theanorc file with:
[nvcc] 
compiler_bindir=C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\bin

[cuda] 
root=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v7.5.

When I do:

nvprof python theano_code.py

The code runs but I don't get any profiling information and the same 
warning about requiring cudaProfilerStop() to flush. So my question is what 
function can I use in the theano code to achieve the same as 
cudaProfilerStop() and what libraries would I need to link in the theanorc 
file for the nvcc compile phase?  From what I've seen/googled it seems like 
nvprof should just work with the python/theano code and I've played around 
a bit with editing the theanorc file but I can't get it to work! 

Anyone got any tips please? Would be much appreciated!
Thanks,
Jo



[theano-users] Re: AlexNet_theano_generate_toy_data.sh problems

2017-03-13 Thread Goffredo Giordano
Thank you, but it doesn't change anything. The errors are the same.



On Monday, March 13, 2017 at 17:12:06 UTC+1, Jesse Livezey wrote:
>
> You can probably modify line 27 in make_labels.py to be
> for ind in range(labels.size // batch_size):
>
> This code was probably written with python 2 where division worked 
> differently.
>
> On Monday, March 13, 2017 at 8:45:39 AM UTC-7, Goffredo Giordano wrote:
>>
>> Hi,
>> I'm a new user trying to explore the wide world of machine learning. I would 
>> like to run the theano_alexnet training from 
>> https://github.com/uoguelph-mlrg/theano_alexnet.
>> My computer is a native 64-bit Windows 10 machine with an Intel Core i7. I use 
>> WinPython-64bit-3.4.4.4QT5 from WinPython 3.4.4.3, Visual Studio 2015 
>> Community Edition Update 3, CUDA 8.0.44 (64-bit), cuDNN v5.1 (August 10, 
>> 2016) for CUDA 8.0, Git source control based on the MinGW compiler, and 
>> OpenBLAS 0.2.14.
>> As the main Python libraries: Theano 0.9.0beta1, Scipy 0.19.0, Keras 1.2.2, 
>> Lasagne 0.2.dev1, Numpy 1.11.1, hickle 2.0.4, h5py 2.6.0, pycuda, pylearn2, 
>> zeromq.
>> I have downloaded the training images and the validation images, and I have 
>> unzipped the development kit from the Imagenet dataset. I have configured 
>> paths.yaml with my folders, but I do not know where I could find the val.txt 
>> and train.txt files. I used the meta_clsloc.mat file and the 
>> ILSVRC2012_validation_ground_truth.txt file from the Imagenet development 
>> kit. From the Git bash shell I try to run generate_toy_data.sh, and I can 
>> find train_labels.npy, val_labels.npy, img_mean.npy and 
>> shuffled_train_filenames.npy together with the validation alexnet 
>> *.hkl files, but nothing in the training folder. I have probably missed some 
>> important step, so my apologies in advance. Thank you so much!
>>
>>
>> Goffredo_Giordano@Goffredo MINGW64 /c/deep_learning/alexnet/preprocessing
>> $ sh generate_toy_data.sh
>> ciao
>> generating toy dataset ...
>>   make_hkl.py:72: 
>> VisibleDeprecationWarning: using a non-integer number instead of an integer 
>> will result in an error in the future
>>   hkl.dump(img_batch[:, :, :, :half_size],
>> make_hkl.py:76: VisibleDeprecationWarning: using a non-integer number 
>> instead of an integer will result in an error in the future
>>   hkl.dump(img_batch[:, :, :, half_size:],
>> Traceback (most recent call last):
>>   File "make_train_val_txt.py", line 26, in 
>> synsets = scipy.io.loadmat(meta_clsloc_mat)['synsets'][0]
>>   File 
>> "C:\deep_learning\WinPython-64bit-3.4.4.4Qt5\python-3.4.4.amd64\lib\site-packages\scipy\io\matlab\mio.py",
>>  
>> line 136, in loadmat
>> matfile_dict = MR.get_variables(variable_names)
>>   File 
>> "C:\deep_learning\WinPython-64bit-3.4.4.4Qt5\python-3.4.4.amd64\lib\site-packages\scipy\io\matlab\mio5.py",
>>  
>> line 272, in get_variables
>> hdr, next_position = self.read_var_header()
>>   File 
>> "C:\deep_learning\WinPython-64bit-3.4.4.4Qt5\python-3.4.4.amd64\lib\site-packages\scipy\io\matlab\mio5.py",
>>  
>> line 226, in read_var_header
>> mdtype, byte_count = self._matrix_reader.read_full_tag()
>>   File "scipy\io\matlab\mio5_utils.pyx", line 546, in 
>> scipy.io.matlab.mio5_utils.VarReader5.read_full_tag 
>> (scipy\io\matlab\mio5_utils.c:5330)
>>   File "scipy\io\matlab\mio5_utils.pyx", line 554, in 
>> scipy.io.matlab.mio5_utils.VarReader5.cread_full_tag 
>> (scipy\io\matlab\mio5_utils.c:5400)
>>   File "scipy\io\matlab\streams.pyx", line 164, in 
>> scipy.io.matlab.streams.ZlibInputStream.read_into 
>> (scipy\io\matlab\streams.c:3052)
>>   File "scipy\io\matlab\streams.pyx", line 151, in 
>> scipy.io.matlab.streams.ZlibInputStream._fill_buffer 
>> (scipy\io\matlab\streams.c:2913)
>> zlib.error: Error -3 while decompressing data: invalid distance too far 
>> back
>> make_labels.py:17: VisibleDeprecationWarning: using a non-integer number 
>> instead of an integer will result in an error in the future
>>   labels = labels[:labels.size / orig_batch_size * orig_batch_size]
>> make_labels.py:23: VisibleDeprecationWarning: using a non-integer number 
>> instead of an integer will result in an error in the future
>>   labels_0 = labels.reshape((-1, batch_size))[::num_div].reshape(-1)
>> make_labels.py:24: VisibleDeprecationWarning: using a non-integer number 
>> instead of an integer will result in an error in the future
>>   labels_1 = labels.reshape((-1, batch_size))[1::num_div].reshape(-1)
>> Traceback (most recent call last):
>>   File "make_labels.py", line 125, in 
>> div_labels(train_label_name, orig_batch_size, num_div)
>>   File "make_labels.py", line 27, in div_labels
>> for ind in range(labels.size / batch_size):
>> TypeError: 'float' object cannot be interpreted as an integer
>>
>>


[theano-users] Announcing Theano 0.9.0rc4

2017-03-13 Thread Steven Bocco

 Announcing Theano 0.9.0rc4


This is a release candidate for a major version with bug fixes.

The upgrade is recommended for developers who want to help test and
report bugs, or want to use new features now.  If you have updated
to 0.9.0rc3, you are highly encouraged to update to 0.9.0rc4.

For those using the bleeding edge version in the
git repository, we encourage you to update to the `rel-0.9.0rc4` tag.


What's New
----------

Highlights:
 - Documentation updates
 - DebugMode fixes, cache cleanup fixes and other small fixes

 - New GPU back-end:

   - Fixed offset error in GpuIncSubtensor
   - Fixed indexing error in GpuAdvancedSubtensor for more than 2 dimensions


Download and Install
--------------------

You can download Theano from http://pypi.python.org/pypi/Theano

Installation instructions are available at
http://deeplearning.net/software/theano/install.html

Description
-----------

Theano is a Python library that allows you to define, optimize, and
efficiently evaluate mathematical expressions involving
multi-dimensional arrays. It is built on top of NumPy. Theano
features:

 * tight integration with NumPy: a similar interface to NumPy's.
   numpy.ndarrays are also used internally in Theano-compiled functions.
 * transparent use of a GPU: perform data-intensive computations up to
   140x faster than on a CPU (support for float32 only).
 * efficient symbolic differentiation: Theano can compute derivatives
   for functions of one or many inputs.
 * speed and stability optimizations: avoid nasty bugs when computing
   expressions such as log(1+ exp(x)) for large values of x.
 * dynamic C code generation: evaluate expressions faster.
 * extensive unit-testing and self-verification: includes tools for
   detecting and diagnosing bugs and/or potential problems.
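
As a small illustration of the points above (a minimal example using the standard
public API, nothing specific to this release):

import theano
import theano.tensor as T

x = T.dscalar('x')
y = x ** 2
f = theano.function([x], y)               # compiled function
df = theano.function([x], T.grad(y, x))   # symbolic differentiation

print(f(3.0))   # 9.0
print(df(3.0))  # 6.0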

Theano has been powering large-scale computationally intensive
scientific research since 2007, but it is also approachable
enough to be used in the classroom (IFT6266 at the University of Montreal).

Resources
---------

About Theano:

http://deeplearning.net/software/theano/

Theano-related projects:

http://github.com/Theano/Theano/wiki/Related-projects

About NumPy:

http://numpy.scipy.org/

About SciPy:

http://www.scipy.org/

Machine Learning Tutorial with Theano on Deep Architectures:

http://deeplearning.net/tutorial/

Acknowledgments
---------------

I would like to thank all contributors of Theano. For this particular
release, many people have helped, notably (in alphabetical order):

 - Arnaud Bergeron
 - Cesar Laurent
 - Frederic Bastien
 - Martin Drawitsch
 - Pascal Lamblin

Also, thank you to all NumPy and Scipy developers as Theano builds on
their strengths.

All questions/comments are always welcome on the Theano
mailing-lists ( http://deeplearning.net/software/theano/#community )



[theano-users] Re: AlexNet_theano_generate_toy_data.sh problems

2017-03-13 Thread Jesse Livezey
You can probably modify line 27 in make_labels.py to be
for ind in range(labels.size // batch_size):

This code was probably written with python 2 where division worked 
differently.
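
Just to illustrate the difference (plain Python, no Theano involved):

size, batch_size = 10, 4
print(size / batch_size)   # Python 3: 2.5 (true division always returns a float)
print(size // batch_size)  # Python 3: 2   (floor division returns an int, which range() accepts)
# In Python 2, 10 / 4 already gave 2 for ints, which is why the original line worked there.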

On Monday, March 13, 2017 at 8:45:39 AM UTC-7, Goffredo Giordano wrote:
>
> Hi,
> I'm a new user trying to explore the wide world of machine learning. I would 
> like to run the theano_alexnet training from 
> https://github.com/uoguelph-mlrg/theano_alexnet.
> My computer is a native 64-bit Windows 10 machine with an Intel Core i7. I use 
> WinPython-64bit-3.4.4.4QT5 from WinPython 3.4.4.3, Visual Studio 2015 
> Community Edition Update 3, CUDA 8.0.44 (64-bit), cuDNN v5.1 (August 10, 
> 2016) for CUDA 8.0, Git source control based on the MinGW compiler, and 
> OpenBLAS 0.2.14.
> As the main Python libraries: Theano 0.9.0beta1, Scipy 0.19.0, Keras 1.2.2, 
> Lasagne 0.2.dev1, Numpy 1.11.1, hickle 2.0.4, h5py 2.6.0, pycuda, pylearn2, 
> zeromq.
> I have downloaded the training images and the validation images, and I have 
> unzipped the development kit from the Imagenet dataset. I have configured 
> paths.yaml with my folders, but I do not know where I could find the val.txt 
> and train.txt files. I used the meta_clsloc.mat file and the 
> ILSVRC2012_validation_ground_truth.txt file from the Imagenet development 
> kit. From the Git bash shell I try to run generate_toy_data.sh, and I can 
> find train_labels.npy, val_labels.npy, img_mean.npy and 
> shuffled_train_filenames.npy together with the validation alexnet 
> *.hkl files, but nothing in the training folder. I have probably missed some 
> important step, so my apologies in advance. Thank you so much!
>
>
> Goffredo_Giordano@Goffredo MINGW64 /c/deep_learning/alexnet/preprocessing
> $ sh generate_toy_data.sh
> ciao
> generating toy dataset ...
>   make_hkl.py:72: 
> VisibleDeprecationWarning: using a non-integer number instead of an integer 
> will result in an error in the future
>   hkl.dump(img_batch[:, :, :, :half_size],
> make_hkl.py:76: VisibleDeprecationWarning: using a non-integer number 
> instead of an integer will result in an error in the future
>   hkl.dump(img_batch[:, :, :, half_size:],
> Traceback (most recent call last):
>   File "make_train_val_txt.py", line 26, in 
> synsets = scipy.io.loadmat(meta_clsloc_mat)['synsets'][0]
>   File 
> "C:\deep_learning\WinPython-64bit-3.4.4.4Qt5\python-3.4.4.amd64\lib\site-packages\scipy\io\matlab\mio.py",
>  
> line 136, in loadmat
> matfile_dict = MR.get_variables(variable_names)
>   File 
> "C:\deep_learning\WinPython-64bit-3.4.4.4Qt5\python-3.4.4.amd64\lib\site-packages\scipy\io\matlab\mio5.py",
>  
> line 272, in get_variables
> hdr, next_position = self.read_var_header()
>   File 
> "C:\deep_learning\WinPython-64bit-3.4.4.4Qt5\python-3.4.4.amd64\lib\site-packages\scipy\io\matlab\mio5.py",
>  
> line 226, in read_var_header
> mdtype, byte_count = self._matrix_reader.read_full_tag()
>   File "scipy\io\matlab\mio5_utils.pyx", line 546, in 
> scipy.io.matlab.mio5_utils.VarReader5.read_full_tag 
> (scipy\io\matlab\mio5_utils.c:5330)
>   File "scipy\io\matlab\mio5_utils.pyx", line 554, in 
> scipy.io.matlab.mio5_utils.VarReader5.cread_full_tag 
> (scipy\io\matlab\mio5_utils.c:5400)
>   File "scipy\io\matlab\streams.pyx", line 164, in 
> scipy.io.matlab.streams.ZlibInputStream.read_into 
> (scipy\io\matlab\streams.c:3052)
>   File "scipy\io\matlab\streams.pyx", line 151, in 
> scipy.io.matlab.streams.ZlibInputStream._fill_buffer 
> (scipy\io\matlab\streams.c:2913)
> zlib.error: Error -3 while decompressing data: invalid distance too far 
> back
> make_labels.py:17: VisibleDeprecationWarning: using a non-integer number 
> instead of an integer will result in an error in the future
>   labels = labels[:labels.size / orig_batch_size * orig_batch_size]
> make_labels.py:23: VisibleDeprecationWarning: using a non-integer number 
> instead of an integer will result in an error in the future
>   labels_0 = labels.reshape((-1, batch_size))[::num_div].reshape(-1)
> make_labels.py:24: VisibleDeprecationWarning: using a non-integer number 
> instead of an integer will result in an error in the future
>   labels_1 = labels.reshape((-1, batch_size))[1::num_div].reshape(-1)
> Traceback (most recent call last):
>   File "make_labels.py", line 125, in 
> div_labels(train_label_name, orig_batch_size, num_div)
>   File "make_labels.py", line 27, in div_labels
> for ind in range(labels.size / batch_size):
> TypeError: 'float' object cannot be interpreted as an integer
>
>



[theano-users] Re: Theano 0.9.0rc1 on Windows 10 x64 and Ubuntu 16.04 have same pygpu errors

2017-03-13 Thread 侠贵族
No, that didn't help. My .theanorc file is:
[global]
floatx = float32
cxx = C:\Users\Song\Miniconda3\Library\mingw-w64\bin\g++.exe
mode = FAST_RUN
device = cuda

[blas]
ldflags = -LC:\Users\Song\Miniconda3\Library\bin -lmkl_rt

[gcc]
cxxflags = -LC:\Users\Song\Miniconda3\Library\mingw-w64\include 
-LC:\Users\Song\Miniconda3\Library\mingw-w64\lib -lm

[cuda]
root = C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0

[nvcc]
flags=--cl-version=2015

[dnn]
enabled=True

[lib]
cnmem=0.70

the error is:

ERROR (theano.gpuarray): Could not initialize pygpu, support disabled
Traceback (most recent call last):
  File 
"C:\Users\Song\Miniconda3\lib\site-packages\theano\gpuarray\__init__.py", 
line 164, in 
use(config.device)
  File 
"C:\Users\Song\Miniconda3\lib\site-packages\theano\gpuarray\__init__.py", 
line 151, in use
init_dev(device)
  File 
"C:\Users\Song\Miniconda3\lib\site-packages\theano\gpuarray\__init__.py", 
line 60, in init_dev
sched=config.gpuarray.sched)
  File "pygpu\gpuarray.pyx", line 614, in pygpu.gpuarray.init 
(pygpu/gpuarray.c:9211)
  File "pygpu\gpuarray.pyx", line 566, in pygpu.gpuarray.pygpu_init 
(pygpu/gpuarray.c:8902)
  File "pygpu\gpuarray.pyx", line 1021, in 
pygpu.gpuarray.GpuContext.__cinit__ (pygpu/gpuarray.c:13264)
pygpu.gpuarray.GpuArrayException: Error loading library: -1
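
In case it helps to narrow it down, this is the direct pygpu call I plan to test
next; it just repeats the initialization from the traceback above, bypassing
Theano entirely (the device string "cuda0" is an assumption for the first GPU):

import pygpu.gpuarray

# Same initialization Theano performs in gpuarray/__init__.py; if this also
# fails, the problem is in pygpu/libgpuarray rather than in Theano itself.
ctx = pygpu.gpuarray.init('cuda0')
print(ctx)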



On Friday, March 10, 2017 at 4:18:07 AM UTC+8, Gábor Borbély wrote:
>
> Dear 侠贵族 ,
>
> Maybe you can try to set the cuda root:
>
> http://deeplearning.net/software/theano/library/config.html#config.config.cuda.root
>
> cheers
> gaebor
>
> On Tuesday, February 28, 2017 at 8:18:57 UTC+1, 侠贵族 wrote:
>
>> That didn't help. I have copied the cuDNN lib, include and bin files into 
>> CUDA-8.0, and set [dnn] enabled=True and [lib] cnmem=0.70.
>>
>> With "device = gpu", it successfully shows:
>> Using gpu device 0: GeForce GTX 670 (CNMeM is enabled with initial size: 
>> 70.0% of memory, cuDNN 5110)
>> With "device = cuda", the error is the same.
>>
>>
>> On Monday, February 27, 2017 at 8:42:31 PM UTC+8, Kiuhnm Mnhuik wrote:
>>>
>>> You can try disabling cuDnn:
>>>
>>> http://deeplearning.net/software/theano_versions/0.9.X/library/config.html#config.config.dnn.enabled
>>> Basically, add
>>>   [dnn]
>>>   enabled=False
>>> to .theanorc.
>>>
>>> On Monday, February 27, 2017 at 10:36:41 AM UTC+1, 侠贵族 wrote:

>>>> I did not install cuDNN. Does the new cuda backend require it?
>>>
>>>
