I tried running this simple code, which copies two large arrays to GPU
memory for a dot-product computation.
import numpy as np
import theano
import theano.tensor as T
a = np.asarray(np.random.uniform(-1,1, (1,4)), dtype=np.float32)
b = np.asarray(np.random.uniform(-1,1, (4,2)), dtype=np.float32)
http://nbviewer.jupyter.org/github/craffel/theano-tutorial/blob/master/Theano%20Tutorial.ipynb
(section 24):
updates = []
for param in params:
    param_update = theano.shared(param.get_value()*0., broadcastable=param.broadcastable)
    updates.append((param, param - learning_rate*param_update))
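As quoted, `param_update` starts at zero and is never changed, so the parameters would never move; in the linked tutorial this line is paired with a second update for `param_update` itself (the velocity). A numpy sketch of one step of that scheme, with illustrative values for `mu`, `learning_rate`, and the gradient:

```python
import numpy as np

# Plain-numpy sketch of the tutorial's momentum updates; mu, learning_rate,
# and grad are illustrative stand-ins, not values from this thread.
mu, learning_rate = 0.9, 0.1
param = np.array([1.0, -2.0])
param_update = np.zeros_like(param)           # theano.shared(param.get_value()*0.)
grad = np.array([0.5, 0.5])                   # stands in for T.grad(cost, param)

# updates.append((param, param - learning_rate*param_update))
param = param - learning_rate * param_update
# the paired velocity update from the same tutorial section
param_update = mu * param_update + (1.0 - mu) * grad
```

On the first step the parameters are unchanged (the velocity was zero); the velocity picks up a fraction of the gradient for the next step.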
I enabled warn_float64 flag in theano config, and now when I'm running my
neural net code, I'm getting the following:
c:\Theano\theano\gof\graph.py:447: UserWarning: You are creating a
TensorVariable with float64 dtype. You requested an action via the Theano
flag warn_float64={ignore,warn,raise,pdb}.
I added dtype=theano.config.floatX and the warning disappeared.
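The usual source of this warning is that `np.random.uniform` (like most numpy constructors) returns float64 by default, which then propagates through the graph. A minimal sketch of the problem and the fix, assuming `theano.config.floatX` is 'float32' (shapes illustrative):

```python
import numpy as np

# numpy constructors default to float64 ...
w64 = np.random.uniform(-1, 1, (3, 3))
assert w64.dtype == np.float64

# ... so cast explicitly to the configured width, e.g. with
# dtype=theano.config.floatX, which is float32 here.
w32 = np.asarray(w64, dtype=np.float32)
```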
On Monday, October 24, 2016 at 8:23:01 AM UTC-7, Michael Klachko wrote:
>
> I enabled warn_float64 flag in theano config, and now when I'm running my
> neural net code, I'm getting the following:
>
Yes, it's supported, I'm using it right now (CUDA 8.0 on Ubuntu 14.04):
>>> import theano
Using gpu device 0: TITAN X (Pascal) (CNMeM is enabled with initial size:
30.0% of memory, cuDNN 5105)
>>> print theano.__version__
0.9.0dev3.dev-20fd30a38d34687e9d944140042762ca9fca6276
On Monday, October 24, 2016 at 9:38:17 AM UTC-7, nouiz wrote:
>>>
>>> What errors do you have? Delete your Theano cache, just in case, and be
>>> sure to use the Theano dev version. I think the last release doesn't support it.
>>>
>>> Fred
>>>
Andre, can you please post your theano config and your convnet parameters?
To debug this I would first try running on CPU only, then MLP, then convnet
with no pooling and tanh, with CuDNN disabled, then partially enable one
thing at a time.
Also, check the obvious things like using the same initial
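For the CPU-only first step of that checklist, a stripped-down `.theanorc` along these lines could be used (values illustrative; `optimizer_excluding=cudnn` excludes the cuDNN graph optimizations):

```ini
[global]
device = cpu
floatX = float32
warn_float64 = warn
optimizer_excluding = cudnn
```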
>>> I am getting about 98% utilization with cudnn 5105. If I use the cuda
>>> backend, I am only getting about 35% utilization.
>>> Any idea why this might be so?
>>>
>>> On Monday, October 24, 2016 at 9:38:17 AM UTC-7, nouiz wrote:
>>>>
>>>> What errors do you have?
as fast as it can?
>
> On Wednesday, November 9, 2016 at 5:36:14 PM UTC-7, Michael Klachko wrote:
>>
>> Ragav, so when GPU is 98% utilized, is the training faster than when it's
>> 35% utilized? Have you timed it?
>>
>> On Wed, Nov 9, 2016 at 4:09 PM, Ragav Venk
This is an old issue, see:
https://groups.google.com/forum/#!topic/theano-users/Q9tD4Af_7ho
On Friday, November 11, 2016 at 10:07:22 AM UTC-8, Amin Farajian wrote:
>
> Hi Fred,
> I just followed your suggestion and hard coded the changes in my Theano
> package, and ran multiple experiments with allow_gc=True.
>
> On Thursday, November 10, 2016 at 4:47:38 PM UTC-7, Michael Klachko wrote:
>>
>> Yes. It depends on the size of your network/input - the smaller it is,
>> the harder it is to keep 3k cores busy all the time.
>> Regarding timing, you don't need to
how do I find that out?
>
> On Friday, November 11, 2016 at 10:00:39 PM UTC-7, Michael Klachko wrote:
>>
>> Do both versions use CuDNN? If gpu0 version didn't use it, that would
>> explain the difference. Also, look at CPU usage for gpu0 version - it could
>>
Ragav Venkatesan wrote:
>>>
>>>> Both are using cuDNN. I am wondering if some ops are running on the
>>>> CPU, how do I find that out?
>>>>
>>>> On Friday, November 11, 2016 at 10:00:39 PM UTC-7, Michael Klachko
>>>> wrote:
I'm also interested in how to do that efficiently. Currently, when I want
to quantize weights, I pull them from GPU using get_value(), quantize them
in Python, and then import them back to GPU with set_value(). But, of
course, this is very slow. For binary quantization, I can use Theano
functio
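The slow host-side path described above boils down to a sign function applied off-GPU; a sketch of that step (weight values illustrative):

```python
import numpy as np

def binary_quantize(w):
    """Host-side binary quantization: the slow get_value()/set_value() path."""
    return np.sign(w).astype(w.dtype)

w = np.array([[0.3, -0.7], [1.2, -0.1]], dtype=np.float32)
wq = binary_quantize(w)   # entries in {-1.0, 0.0, +1.0}
```

To avoid the host round trip, the same operation could be expressed symbolically as a shared-variable update, e.g. `theano.function([], updates=[(w_shared, T.sgn(w_shared))])`, so the quantization runs where the weights live (`w_shared` is a hypothetical shared variable).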
Hi Fred, are multiple GPUs on Windows supported if I use each GPU to train
a separate network? For example, can I launch two theano programs, where
one is using device=gpu0, and the other device=gpu1?
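Assuming per-process device selection via `THEANO_FLAGS` works on Windows as it does elsewhere (the script names below are placeholders), the two independent runs would be launched as separate processes, each pinned to its own GPU:

```shell
THEANO_FLAGS=device=gpu0 python train_net_a.py
THEANO_FLAGS=device=gpu1 python train_net_b.py
```

On Windows cmd the variable is set separately, e.g. `set THEANO_FLAGS=device=gpu0` before each invocation.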
On Monday, March 13, 2017 at 3:15:04 PM UTC-7, nouiz wrote:
>
> Hi,
>
> For many reasons, mul
I have the same error. I did both conda commands to install libgpuarray and
pygpu, and both seemed to be installed successfully. However, I get this:
In [8]: import theano
ERROR:theano.gpuarray:Could not initialize pygpu, support disabled
Traceback (most recent call last):
File "c:\Theano\t
I updated my CUDA from 7.5 to 8.0, and the error changed to:
In [1]: import theano
ERROR:theano.gpuarray:Could not initialize pygpu, support disabled
Traceback (most recent call last):
File "c:\Theano\theano\gpuarray\__init__.py", line 164, in
use(config.device)
File "c:\Theano\theano\
I just updated to gpuarray backend, and suddenly I started seeing float64
warnings. I printed the graph (float64 shown in bold red below), then set
it up so that float64 triggers pdb. However, I still can't figure out which
variable it is. By using UP in pdb, I eventually got to my training function
Oh, looks like my message got truncated. Here's the complete graph:
C:\ProgramData\Miniconda2\lib\site-packages\theano\gof\type.py:405:
UserWarning: You are creating a TensorVariable with float64 dtype. You
requested an action via the Theano flag
warn_float64={ignore,warn,raise,pdb}.
return u
Has anyone here trained AlexNet successfully on the ImageNet dataset? I'm
trying to decide whether it's worth trying to make it work with Theano, or
if I should switch to TensorFlow...
You received this message because you are subscribed to the Google Groups
"theano-users" group.
In my case, the problem was related to the old version of mingw64 that I
installed in the past. I removed it, installed the mingw toolchain with
conda, and updated the path to it in .theanorc; now everything works.
On Thursday, March 23, 2017 at 8:35:04 PM UTC-7, Michael Klachko wrote:
>
You can install everything using conda (use mingw's gcc instead of Visual
Studio):
http://deeplearning.net/software/theano/install_windows.html#requirements-installation-through-conda-recommended
Here's the summary of the process:
1. install CUDA
2. copy the CuDNN files
3. install miniconda
4. i
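A sketch of the conda part of those steps (package names taken from the linked install page; exact names and versions may have changed):

```shell
conda install numpy scipy mkl-service libpython m2w64-toolchain
conda install theano pygpu
```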
validation images from ImageNet
> standard dataset. Later I will post the errors. But I think that for some
> people AlexNet with theano works.
>
> Goffredo
>
> On 28/Mar/2017 19:14, "Michael Klachko" wrote:
>
>> Has anyone here trained Alexnet success
Frédéric Bastien <frederic.bast...@gmail.com> wrote:
> Just an update: we fixed many such useless warnings. Update Theano to the
> dev version. If you still see them, tell us.
>
> Fred
>
> On Sat, Mar 25, 2017 at 4:22 AM Michael Klachko
> wrote:
>
>> Oh, looks like my message
.9.0-py27_0\Lib\site-packages\theano
>> folder?
>>
>> On Tue, Apr 11, 2017 at 8:02 AM, Frédéric Bastien <
>> frederic.bast...@gmail.com> wrote:
>>
>> Just an update: we fixed many such useless warnings. Update Theano to the
>> dev version. If you st
Theano
>
> I added "-U"
>
> On Thu, Apr 27, 2017 at 5:45 PM Michael Klachko
> wrote:
>
>> It does not want to update:
>>
>> C:\Users\Michael\>pip install --no-deps git+https://github.com/Theano/
>> Theano.git#egg=Theano
>> Requirement al
I had the same problem, and I solved it by overwriting all versions of the
files I could find, with the latest version. Also, I used CuDNN v6 with the
latest bleeding edge Theano, and it seems to work fine.
On Thursday, April 20, 2017 at 3:07:38 PM UTC-7, Robert wrote:
>
> I come from a Windows
be imported successfully, but running
> pygpu.test() results in those errors.
>
> Can you post your working .theanorc file?
>
>
> On Tuesday, 28 March 2017 13:20:44 UTC-4, Michael Klachko wrote:
>
>> In my case, the problem was related to the old version of min
I'm trying the new grouped convolutions feature in the latest Theano
version, so I ran a simple convnet with CIFAR-10: 32x32 RGB input images
(batch size = 128), and the first convolutional layer has 9 feature maps. I
want to have 3 feature maps per color, so if I understand it correctly, I
sho
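If grouped convolution here follows the usual definition (with a `num_groups`-style parameter, which is an assumption about the new feature), then with 3 groups each filter spans only 3/3 = 1 input channel, and the filter tensor shrinks accordingly. A numpy sketch of the shape bookkeeping:

```python
import numpy as np

# CIFAR-10 batch: 128 RGB images, 3 input channels, 9 output feature maps,
# split into 3 groups (one color channel per group, 3 maps per group).
batch, in_ch, H, W = 128, 3, 32, 32
groups, out_ch, k = 3, 9, 3

in_per_group = in_ch // groups     # each filter sees 1 input channel
out_per_group = out_ch // groups   # each group produces 3 feature maps

# Grouped filters: (out_ch, in_ch // groups, k, k) rather than (out_ch, in_ch, k, k)
filters = np.zeros((out_ch, in_per_group, k, k), dtype=np.float32)
```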
The learning rate schedule is defined in this line:
updates.append((l_r,0.95*l_r)), and is used in this line: train_model =
theano.function([index], cost, updates=updates ...
If you don't understand what's going on, read about Theano functions.
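Numerically, that update pair just multiplies the learning rate by 0.95 on every call of the compiled function; a plain-Python sketch (initial rate illustrative):

```python
# Each call of the compiled function applies (l_r, 0.95*l_r),
# i.e. exponential decay of the learning rate.
l_r = 0.1
history = []
for epoch in range(3):
    history.append(l_r)
    l_r *= 0.95        # what updates.append((l_r, 0.95*l_r)) does per call
```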
On Thursday, September 7, 2017 at 6:58:34 AM UTC
I have CUDA 9.0 and CuDNN 7.0.5 on my Ubuntu 16.04, and Tensorflow works
fine. In order to install theano, I first installed miniconda, then ran "conda
install theano pygpu" and it seemed to have installed fine.
However, here's what I get:
$ python
Python 3.6.5 |Anaconda, Inc.| (default, A
may be an error message from something else.
>
> First things first, since you are on cuda 9.0, I would recommend that you
> update your driver to 384.111 or 390.*. If that doesn't help, then I'll
> need some help reproducing the problem since I don't get that in any o
e cuda version installed to have TF working.
>
> On Thu, May 10, 2018 at 16:28, Michael Klachko wrote:
>
>> After struggling with this error for a day, I decided to upgrade CUDA to
>> 9.1 and CuDNN to 7.1. After that I got "your driver might be too old"
>> error.