[theano-users] Re: Error running example in Scan documentation

2016-08-31 Thread Francisco Vargas
Just upgrade; that will fix it. It must have been an intermediate broken 
version at the time. It works now.
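
For anyone who hits this later: "numpy.dtype has the wrong size, try
recompiling" usually means the compiled modules no longer match the numpy
that is installed. A quick sanity check (a sketch, assuming a standard pip
or git install):

import numpy
import theano

# The numpy that Theano's compiled modules were built against must match
# the numpy imported at runtime.
print(numpy.__version__)
print(theano.__version__)
print(theano.__file__)  # confirms which copy of Theano is on the path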

On Thursday, August 11, 2016 at 2:48:31 AM UTC+1, Jason T wrote:
>
> Hi,
>
> I'm learning Theano and was working through the first example in the scan 
> documentation: 
>
> import theano
> import theano.tensor as T
>
> k = T.iscalar("k")
> A = T.vector("A")
>
> # Symbolic description of the result
> result, updates = theano.scan(fn=lambda prior_result, A: prior_result * A,
>                               outputs_info=T.ones_like(A),
>                               non_sequences=A,
>                               n_steps=k)
>
> # We only care about A**k, but scan has provided us with A**1 through A**k.
> # Discard the values that we don't care about. Scan is smart enough to
> # notice this and not waste memory saving them.
> final_result = result[-1]
>
> # compiled function that returns A**k
> power = theano.function(inputs=[A, k], outputs=final_result, updates=updates)
>
> print(power(range(10), 2))
> print(power(range(10), 4))
>
>
> and I'm getting the error:
>
> ValueError: ('The following error happened while compiling the node', 
> forall_inplace,gpu,scan_fn}(Elemwise{maximum,no_inplace}.0, 
> GpuIncSubtensor{InplaceSet;:int64:}.0, GpuFromHost.0), '\n', 'numpy.dtype has 
> the wrong size, try recompiling')
>
>
> I'm running the latest version of Theano from github and it works for an 
> example I wrote for a convnet on MNIST, but so far I haven't gotten any 
> examples with scan to work yet. A similar error appears if I try to run 
> anything with scan on the CPU as well. Any ideas what could be causing the 
> error?
>
> Thanks,
>
> Jason
>



[theano-users] Re: Error running example in Scan documentation

2016-08-31 Thread Francisco Vargas

I have the same problem, could someone please advise?


On Thursday, August 11, 2016 at 2:48:31 AM UTC+1, Jason T wrote:
>
> Hi,
>
> I'm learning Theano and was working through the first example in the scan 
> documentation: 
>
> import theano
> import theano.tensor as T
>
> k = T.iscalar("k")
> A = T.vector("A")
>
> # Symbolic description of the result
> result, updates = theano.scan(fn=lambda prior_result, A: prior_result * A,
>                               outputs_info=T.ones_like(A),
>                               non_sequences=A,
>                               n_steps=k)
>
> # We only care about A**k, but scan has provided us with A**1 through A**k.
> # Discard the values that we don't care about. Scan is smart enough to
> # notice this and not waste memory saving them.
> final_result = result[-1]
>
> # compiled function that returns A**k
> power = theano.function(inputs=[A, k], outputs=final_result, updates=updates)
>
> print(power(range(10), 2))
> print(power(range(10), 4))
>
>
> and I'm getting the error:
>
> ValueError: ('The following error happened while compiling the node', 
> forall_inplace,gpu,scan_fn}(Elemwise{maximum,no_inplace}.0, 
> GpuIncSubtensor{InplaceSet;:int64:}.0, GpuFromHost.0), '\n', 'numpy.dtype has 
> the wrong size, try recompiling')
>
>
> I'm running the latest version of Theano from github and it works for an 
> example I wrote for a convnet on MNIST, but so far I haven't gotten any 
> examples with scan to work yet. A similar error appears if I try to run 
> anything with scan on the CPU as well. Any ideas what could be causing the 
> error?
>
> Thanks,
>
> Jason
>



[theano-users] Scan not working

2016-08-31 Thread Francisco Vargas
I get the following error when running the basic scan examples:
  File "__init__.pxd", line 155, in init theano.scan_module.scan_perform 
(/home/kaggleteviot/.theano/compiledir_Linux-3.19--generic-x86_64-with-Ubuntu-14.04-trusty-x86_64-2.7.6-64/scan_perform/mod.cpp:9984)
ValueError: ('The following error happened while compiling the node', 
forall_inplace,gpu,scan_fn}(Elemwise{maximum,no_inplace}.0, 
GpuIncSubtensor{InplaceSet;:int64:}.0, GpuFromHost.0), '\n', 'numpy.dtype 
has the wrong size, try recompiling')

snippet:
k = T.iscalar("k")
A = T.vector("A")

# Symbolic description of the result
result, updates = theano.scan(fn=lambda prior_result, A: prior_result * A,
  outputs_info=T.ones_like(A),
  non_sequences=A,
  n_steps=k)

# We only care about A**k, but scan has provided us with A**1 through A**k.
# Discard the values that we don't care about. Scan is smart enough to
# notice this and not waste memory saving them.
final_result = result[-1]

# compiled function that returns A**k
power = theano.function(inputs=[A,k], outputs=final_result, updates=updates)

print(power(range(10),2))
print(power(range(10),4))
fct(T.arange(4), 1)
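
In case it helps anyone searching for this error: "numpy.dtype has the wrong
size, try recompiling" typically points at stale compiled modules. A minimal
sketch of one way to force a rebuild, assuming the default compiledir
location shown in the traceback above:

import shutil
import theano

# theano.config.compiledir is the directory holding scan_perform and the
# other generated modules; deleting it forces Theano to recompile
# everything against the current numpy on the next run.
shutil.rmtree(theano.config.compiledir)

The same can be done from the shell with the bundled theano-cache script
(theano-cache clear).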



[theano-users] support for complex numbers in dot

2016-08-31 Thread Francisco Vargas
For the simple snippet

def discrete_fourier_transform(signal_tensor):
    """
    :param signal_tensor: Tensor of form (number of signals, signal length)

    :returns: Discrete fourier transform of each signal

    Description:
    """

    x = T.dmatrix()
    W_t = T.dmatrix()
    f = T.dot(W_t, x)

    # dft matrix
    W = dft(signal_tensor.shape[1])

    # bind fourier transform
    disc_ft = theano.function([W_t, x], f)

    # compute and return
    return disc_ft(W, signal_tensor)

I get:

TypeError: ('Bad input argument to theano function with name 
"integral_transforms.py:41"  at index 0(0-based)', 'TensorType(float64, 
matrix) cannot store a value of dtype complex128 without risking loss of 
precision. If you do not mind this loss, you can: 1) explicitly cast your 
data to float64, or 2) set "allow_input_downcast=True" when calling 
"function".', array([[ 1. +0.e+00j,  1. 
+0.e+00j,

I can handle it by separating the real and imaginary parts and adding them 
up at the end, but I would prefer not to do so.
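
For reference, a minimal sketch of that real/imaginary split, assuming the
input signals themselves are real-valued (the names here are illustrative):

import numpy as np
import theano
import theano.tensor as T
from scipy.linalg import dft

W_re = T.dmatrix("W_re")  # real part of the DFT matrix
W_im = T.dmatrix("W_im")  # imaginary part of the DFT matrix
x = T.dmatrix("x")        # signals as columns: (signal length, num signals)

# Two real-valued dot products stand in for the single complex one.
disc_ft = theano.function([W_re, W_im, x], [T.dot(W_re, x), T.dot(W_im, x)])

signals = np.random.randn(8, 3)
W = dft(signals.shape[0])  # complex128 DFT matrix from scipy
re, im = disc_ft(np.ascontiguousarray(W.real),
                 np.ascontiguousarray(W.imag), signals)
result = re + 1j * im      # matches np.dot(W, signals)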



Re: [theano-users] TypeError: unhashable type: 'numpy.ndarray'

2016-08-31 Thread Jesse Livezey
@Fred and Pascal, I think Beatriz's question was answered in another thread.
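
For anyone who finds this thread later, the likely fix (a sketch reusing the
names from the quoted code, and assuming test_losses starts as an empty
list) is to call the compiled function with the batch index instead of
indexing into it:

# salidas_capa3 is a compiled theano.function, so it must be called, not
# subscripted; test_model(i) returns a numpy array, which is why indexing
# raised "unhashable type".
for i in range(n_test_batches):
    test_losses.append(test_model(i))
    y_pred_test = salidas_capa3(i)  # predictions for batch i
    print y_pred_test
test_score = numpy.mean(test_losses)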

On Wednesday, August 31, 2016 at 12:58:11 PM UTC-7, Pascal Lamblin wrote:
>
> `salidas_capa3` is a theano function, which is a callable object. 
> However, you are trying to _index_ into it using 
> `salidas_capa3[test_model(i)]`. 
> What is the behaviour you would expect from that code? 
>
> On Wed, Jul 27, 2016, Beatriz G. wrote: 
> > Hi guys, 
> > 
> > I am having a similar problem. I am trying to compile the following 
> > theano function: 
> > 
> > layer3 = LogisticRegression(input=layer2.output, n_in=100, n_out=4) 
> > 
> > salidas_capa3 = theano.function( 
> >     [index], 
> >     layer3.y_pred, 
> >     givens={ 
> >         x: test_set_x[index * batch_size: (index + 1) * batch_size] 
> >     } 
> > ) 
> > 
> > Which I call in the test part: 
> > 
> > for i in range(n_test_batches): 
> >     test_losses = [test_model(i)] 
> >     y_pred_test = salidas_capa3[test_model(i)] 
> >     print y_pred_test 
> > test_score = numpy.mean(test_losses) 
> > 
> > And I am getting the following error: 
> > 
> > 
> > Traceback (most recent call last): 
> >   File "/home/beaa/Escritorio/Theano/Separando_Lenet.py", line 416, in <module> 
> >     evaluate_lenet5() 
> >   File "/home/beaa/Escritorio/Theano/Separando_Lenet.py", line 392, in evaluate_lenet5 
> >     y_pred_test = salidas_capa3[test_model(i)] 
> >   File "/home/beaa/.local/lib/python2.7/site-packages/theano/compile/function_module.py", line 545, in __getitem__ 
> >     return self.value[item] 
> >   File "/home/beaa/.local/lib/python2.7/site-packages/theano/compile/function_module.py", line 480, in __getitem__ 
> >     s = finder[item] 
> > TypeError: unhashable type: 'numpy.ndarray' 
> > 
> > 
> > 
> > 
> > Could anyone guide me? 
> > 
> > 
> > Here is my code: 
> > 
> > 
> > class LeNetConvPoolLayer(object): 
> >     """Pool Layer of a convolutional network """ 
> > 
> >     def __init__(self, rng, input, filter_shape, image_shape, poolsize=(2, 2)): 
> >         """ 
> >         Allocate a LeNetConvPoolLayer with shared variable internal parameters. 
> >         :type rng: numpy.random.RandomState 
> >         :param rng: a random number generator used to initialize weights 
> >         :type input: theano.tensor.dtensor4 
> >         :param input: symbolic image tensor, of shape image_shape 
> >         :type filter_shape: tuple or list of length 4 
> >         :param filter_shape: (number of filters, num input feature maps, 
> >                               filter height, filter width) 
> >         :type image_shape: tuple or list of length 4 
> >         :param image_shape: (batch size, num input feature maps, 
> >                              image height, image width) 
> >         :type poolsize: tuple or list of length 2 
> >         :param poolsize: the downsampling (pooling) factor (#rows, #cols) 
> >         """ 
> > 
> >         # the number of feature maps must be the same in both variables 
> >         assert image_shape[1] == filter_shape[1] 
> >         self.input = input 
> > 
> >         # there are "num input feature maps * filter height * filter width" 
> >         # inputs to each hidden unit (the number of neurons in the previous layer) 
> >         fan_in = numpy.prod(filter_shape[1:]) 
> > 
> >         # each unit in the lower layer receives a gradient from: 
> >         # "num output feature maps * filter height * filter width" / pooling size 
> >         # (number of output neurons: num filters * filter height * filter 
> >         # width / poolsize(size * size)) 
> >         fan_out = (filter_shape[0] * numpy.prod(filter_shape[2:]) / 
> >                    numpy.prod(poolsize)) 
> > 
> >         # initialize weights with random weights; W_bound is computed this 
> >         # way because of the tanh activation function, and the weights 
> >         # depend on the filter size (neurons) 
> >         W_bound = numpy.sqrt(6. / (fan_in + fan_out)) 
> >         self.W = theano.shared( 
> >             numpy.asarray( 
> >                 rng.uniform(low=-W_bound, high=W_bound, size=filter_shape), 
> >                 dtype=theano.config.floatX  # so that it is valid on the GPU 
> >             ), 
> >             borrow=True 
> >         ) 
> > 
> >         # the bias is a 1D tensor -- one bias per output feature map 
> >         b_values = numpy.zeros((filter_shape[0],), dtype=theano.config.floatX) 
> >         self.b = theano.shared(value=b_values, borrow=True) 
> > 
> >         # convolve input feature maps with filters 
> >         conv_out = conv.conv2d( 
> >             input=input, 
> >             filters=self.W, 
> >             filter_shape=filter_shape, 
> >             image_shape=image_shape 
> >         ) 
> > 
> >         # downsample each feature map individually, 

Re: [theano-users] What exactly does theano's clone do to shared variables?

2016-08-31 Thread AA
Alright, so when calculating the derivative with respect to a shared 
variable that has been replicated, appearances of the cloned shared variable 
won't be considered, but the value of the cloned shared variable will change 
when the value of the original is updated, correct?

If so, how do I reference the cloned shared variable, so that I can take the 
derivative with respect to it as well?

Thanks for the help :)
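
In case it is useful, one way to fish the cloned shared variables back out
of the graph (a sketch; graph.inputs is Theano's generic traversal helper,
so this is one route, not necessarily the only one):

import numpy as np
import theano
from theano.gof import graph

w = theano.shared(np.ones(3), name="w")
cost = (w ** 2).sum()

# share_inputs=False gives a graph whose inputs are fresh copies.
cloned_cost = theano.clone(cost, share_inputs=False)

# Walk the cloned graph and collect its shared-variable inputs.
cloned_shareds = [v for v in graph.inputs([cloned_cost])
                  if isinstance(v, theano.compile.SharedVariable)]

# Derivatives can then be taken with respect to the clones.
grads = theano.grad(cloned_cost, wrt=cloned_shareds)
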
On Wednesday, August 31, 2016 at 20:17:23 UTC+3, nouiz wrote:
>
> Hi,
>
> When we clone a shared variable, it creates a new object, so if you 
> compare them for equality, it will be false.
>
> But they represent the same value under the hood. This means that if you 
> change the value of either the clone or the original shared variable, the 
> change will be reflected in the other; they hold the same pointer to the 
> same data.
>
> It is not the same as making 2 shared variables with the same initial 
> value; in that case, a change to one isn't replicated to the other.
>
> Fred
>
> On Wed, Aug 31, 2016 at 1:10 PM, AA  
> wrote:
>
>> According to the documentation, clone has a flag (share_inputs) which, if 
>> set to False, will clone the shared variables in the computational graph. 
>> But what exactly does that mean?
>>
>> If I understand correctly, the function returns a symbolic expression 
>> which is identical to the one given as a parameter, except some nodes in 
>> the graph have been replaced.
>> First of all, how does one reference the other variables that are cloned 
>> if share_inputs is set to False? they don't seem to be part of what 
>> theano.clone returns.
>>
>> Second of all, what exactly does it mean to clone a shared variable, when 
>> the underlying storage is the same? how does that affect training?
>>
>> Thank you :)
>>



Re: [theano-users] What exactly does theano's clone do to shared variables?

2016-08-31 Thread Frédéric Bastien
Hi,

When we clone a shared variable, it creates a new object, so if you compare
them for equality, it will be false.

But they represent the same value under the hood. This means that if you
change the value of either the clone or the original shared variable, the
change will be reflected in the other; they hold the same pointer to the
same data.

It is not the same as making 2 shared variables with the same initial value;
in that case, a change to one isn't replicated to the other.

Fred
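
A small sketch of the aliasing described above (my reading is that
SharedVariable.clone() keeps the same container, which is also what
theano.clone with share_inputs=False does to shared inputs):

import numpy as np
import theano

w = theano.shared(np.zeros(3), name="w")

# The clone is a distinct Python object...
w_clone = w.clone()
assert w_clone is not w

# ...but it wraps the same underlying storage, so updating the original is
# visible through the clone.
w.set_value(np.ones(3))
print(w_clone.get_value())  # -> [ 1.  1.  1.]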

On Wed, Aug 31, 2016 at 1:10 PM, AA  wrote:

> According to the documentation, clone has a flag (share_inputs) which, if
> set to False, will clone the shared variables in the computational graph.
> But what exactly does that mean?
>
> If I understand correctly, the function returns a symbolic expression
> which is identical to the one given as a parameter, except some nodes in
> the graph have been replaced.
> First of all, how does one reference the other variables that are cloned
> if share_inputs is set to False? they don't seem to be part of what
> theano.clone returns.
>
> Second of all, what exactly does it mean to clone a shared variable, when
> the underlying storage is the same? how does that affect training?
>
> Thank you :)
>



[theano-users] What exactly does theano's clone do to shared variables?

2016-08-31 Thread AA
According to the documentation, clone has a flag (share_inputs) which, if 
set to False, will clone the shared variables in the computational graph. 
But what exactly does that mean?

If I understand correctly, the function returns a symbolic expression which 
is identical to the one given as a parameter, except some nodes in the 
graph have been replaced.
First of all, how does one reference the other variables that are cloned if 
share_inputs is set to False? they don't seem to be part of what 
theano.clone returns.

Second of all, what exactly does it mean to clone a shared variable, when 
the underlying storage is the same? how does that affect training?

Thank you :)



Re: [theano-users] Re: Issue with GTX 1080

2016-08-31 Thread Frédéric Bastien
I don't know how much slower it would be. Maybe

nvcc.flags=-arch=sm_50

would work. That would be better than sm_30 if it works. I have never timed
the slowdown.

Fred
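
(For readers of the archive: the flag can go in ~/.theanorc, e.g.

[nvcc]
flags = -arch=sm_50

or on the command line as THEANO_FLAGS='nvcc.flags=-arch=sm_50'. Whether
sm_50 beats sm_30 here is untested, as Fred says; sm_61 itself needs cuda8.)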

On Tue, Aug 30, 2016 at 11:45 PM, Florin Andrei 
wrote:

> It is the new 2016 Nvidia Titan X Pascal.
>
> sm_61 also generates an error.
>
> But sm_30 seems to work fine. Is that a lot slower?
>
>
> On Monday, August 29, 2016 at 7:35:36 AM UTC-7, nouiz wrote:
>>
>> Which Titan X? The old one or the new one? From the error, the new one.
>> For the new one, you need the new driver which you probably already have.
>>
>> You can force an old cuda architecture, but that would slow down
>> your code.
>>
>> I would recommend that you install both nvcc versions in different
>> directories and have Theano use the new one. This way you won't slow down
>> Theano.
>>
>> If you don't want to do that, you can use this Theano flag:
>>
>> nvcc.flags=-arch=sm_61
>>
>> On Sat, Aug 27, 2016 at 11:51 PM, Florin Andrei 
>> wrote:
>>
>>> CUDA 7.5 actually.
>>>
>>>
>>> On Saturday, August 27, 2016 at 8:50:26 PM UTC-7, Florin Andrei wrote:

 I have the same problem with Titan X, Theano installed via pip, Ubuntu
 14.04, CUDA 7.4, nvidia-367. I can't upgrade CUDA to 8.0 for compatibility
 reasons with other apps.

 What is the workaround to allow Theano to use CUDA 7.5?


 On Wednesday, August 3, 2016 at 5:44:34 AM UTC-7, nouiz wrote:
>
> To get the max speedup from the GPU, it is recommended to install cuda8.
> If you absolutely can't do that, there is a workaround, but you will
> probably slow down the execution.
>
> Fred
>
> On Aug 3, 2016 03:10, "Helson Lee"  wrote:
>
>> http://stackoverflow.com/questions/38125310/nvcc-fatal-value
>> -sm-61-is-not-defined-for-option-gpu-architecture-error-wi
>> Here I found a similar issue. Is it necessary to download CUDA 8.0 if I
>> want to use Theano with GTX 1080?
>>
>> On Wednesday, August 3, 2016 at 4:03:07 PM UTC+9, Helson Lee wrote:
>>>
>>> When I tried to run theano scripts, the following error was raised.
>>>
>>> nvcc fatal   : Value 'sm_61' is not defined for option
>>> 'gpu-architecture'
>>>
>>> The full log is available in the attached file.
>>>
>>> SYSTEM: Xeon E5-1650 @ 3.5Ghz, 64GB DDR4, GTX 1080 @ 1733Mhz x 4EU,
>>> Ubuntu 14.04 LTS server, numpy with openblas.
>>>
>>> Does Theano support Pascal architecture?
>>>
>>> Thanks.
>>>
>>> Helson
>>>
