[theano-users] Re: six package

2017-12-13 Thread Beatriz G.
If I do not use the pylearn2 package, I can run my code on the CPU, but with 
device=cuda0 or device=cuda the execution looks like it is running; however, if 
I type "top" in the Ubuntu shell, I do not see any Python process.



On Wednesday, December 13, 2017 at 17:35:28 (UTC+1), Beatriz G. wrote:
>
> When I execute my code I get the following error:  
>
> File "/home/bea/Desktop/bla/Primera_prueba/casia_Lenet.py", line 6, in 
> 
> from layers_gaussian_init import *
>   File 
> "/home/bea/Desktop/Publicacion_Aris/Primera_prueba/layers_gaussian_init.py", 
> line 6, in 
> from pylearn2.expr.normalize import CrossChannelNormalization
>   File "/home/bea/pylearn2/pylearn2/__init__.py", line 4, in 
> from pylearn2.utils.logger import configure_custom
>   File "/home/bea/pylearn2/pylearn2/utils/__init__.py", line 11, in 
> 
> from theano.compat.six.moves import input, zip as izip
> ImportError: No module named six.moves
>
> I have tried to uninstall and reinstall the six package manually and with pip or 
> conda (I am using Anaconda), but I still get the error and I do not know what 
> to do.
>
> Regards.
>



[theano-users] six package

2017-12-13 Thread Beatriz G.
When I execute my code I get the following error:  

File "/home/bea/Desktop/bla/Primera_prueba/casia_Lenet.py", line 6, in 

from layers_gaussian_init import *
  File 
"/home/bea/Desktop/Publicacion_Aris/Primera_prueba/layers_gaussian_init.py", 
line 6, in 
from pylearn2.expr.normalize import CrossChannelNormalization
  File "/home/bea/pylearn2/pylearn2/__init__.py", line 4, in 
from pylearn2.utils.logger import configure_custom
  File "/home/bea/pylearn2/pylearn2/utils/__init__.py", line 11, in 
from theano.compat.six.moves import input, zip as izip
ImportError: No module named six.moves

I have tried to uninstall and reinstall the six package manually and with pip or 
conda (I am using Anaconda), but I still get the error and I do not know what to 
do.

Regards.



[theano-users] Re: GpuCorrMM encountered a CUBLAS error

2017-12-13 Thread Beatriz G.
After trying a lot of things, I decided to uninstall and reinstall 
theano, and a new version was installed. The new version requires device=cuda, 
so my theanorc file now looks like:

[global]
device = cuda
floatX = float32


[blas]
ldflags = -lopenblas


[nvcc]
# flags=-D_FORCE_INLINES
optimizer_including=cudnn

[cuda]
root=/usr/local/cuda-9.1


And I get the following output after trying Lenet:


Using cuDNN version 7005 on context None
Mapped name None to device cuda: GeForce GTX 750 Ti (:06:00.0)
... loading data
... building the model
LENET.py:108: UserWarning: DEPRECATION: the 'ds' parameter is not going to 
exist anymore as it is going to be replaced by the parameter 'ws'.
  ignore_border=True
Traceback (most recent call last):
  File "LENET.py", line 394, in 
evaluate_lenet5()
  File "LENET.py", line 228, in evaluate_lenet5
y: test_set_y[index * batch_size: (index + 1) * batch_size]
  File 
"/home/bea/anaconda2/lib/python2.7/site-packages/theano/compile/function.py", 
line 317, in function
output_keys=output_keys)
  File 
"/home/bea/anaconda2/lib/python2.7/site-packages/theano/compile/pfunc.py", 
line 486, in pfunc
output_keys=output_keys)
  File 
"/home/bea/anaconda2/lib/python2.7/site-packages/theano/compile/function_module.py",
 
line 1841, in orig_function
fn = m.create(defaults)
  File 
"/home/bea/anaconda2/lib/python2.7/site-packages/theano/compile/function_module.py",
 
line 1715, in create
input_storage=input_storage_lists, storage_map=storage_map)
  File 
"/home/bea/anaconda2/lib/python2.7/site-packages/theano/gof/link.py", line 
699, in make_thunk
storage_map=storage_map)[:3]
  File "/home/bea/anaconda2/lib/python2.7/site-packages/theano/gof/vm.py", 
line 1084, in make_all
impl=impl))
  File "/home/bea/anaconda2/lib/python2.7/site-packages/theano/gof/op.py", 
line 955, in make_thunk
no_recycling)
  File "/home/bea/anaconda2/lib/python2.7/site-packages/theano/gof/op.py", 
line 858, in make_c_thunk
output_storage=node_output_storage)
  File "/home/bea/anaconda2/lib/python2.7/site-packages/theano/gof/cc.py", 
line 1217, in make_thunk
keep_lock=keep_lock)
  File "/home/bea/anaconda2/lib/python2.7/site-packages/theano/gof/cc.py", 
line 1157, in __compile__
keep_lock=keep_lock)
  File "/home/bea/anaconda2/lib/python2.7/site-packages/theano/gof/cc.py", 
line 1620, in cthunk_factory
key=key, lnk=self, keep_lock=keep_lock)
  File 
"/home/bea/anaconda2/lib/python2.7/site-packages/theano/gof/cmodule.py", 
line 1174, in module_from_key
module = lnk.compile_cmodule(location)
  File "/home/bea/anaconda2/lib/python2.7/site-packages/theano/gof/cc.py", 
line 1523, in compile_cmodule
preargs=preargs)
  File 
"/home/bea/anaconda2/lib/python2.7/site-packages/theano/gof/cmodule.py", 
line 2368, in compile_str
return dlimport(lib_filename)
  File 
"/home/bea/anaconda2/lib/python2.7/site-packages/theano/gof/cmodule.py", 
line 302, in dlimport
rval = __import__(module_name, {}, {}, [module_name])
ImportError: ('The following error happened while compiling the node', 
GpuDnnConv{algo='small', inplace=True, num_groups=1}(GpuContiguous.0, 
GpuContiguous.0, GpuAllocEmpty{dtype='float32', context_name=None}.0, 
GpuDnnConvDesc{border_mode='valid', subsample=(1, 1), dilation=(1, 1), 
conv_mode='conv', precision='float32', num_groups=1}.0, Constant{1.0}, 
Constant{0.0}), '\n', 
'/home/bea/.theano/compiledir_Linux-4.4--generic-x86_64-with-debian-stretch-sid-x86_64-2.7.12-64/tmpPD9sEN/97ac95f817846a3cb0867215657bdc2150272dcddf165864039b936dd3b77309.so:
 
undefined symbol: cudnnGetConvolutionGroupCount', 
"[GpuDnnConv{algo='small', inplace=True, 
num_groups=1}(<GpuArrayType(float32, (False, True, False, False))>, 
<GpuArrayType(float32, 4D)>, <GpuArrayType(float32, 4D)>, 
<CDataType{cudnnConvolutionDescriptor_t}>, Constant{1.0}, Constant{0.0})]")


Regards. 

On Wednesday, December 13, 2017 at 13:50:44 (UTC+1), Beatriz G. wrote:
>
> Hi everyone.
>
> I used to work with Theano and it worked perfectly, but after installing 
> tensorflow with conda, and some dependencies to work with it, my Theano has 
> stopped working.
>
> I obtain the following error:
>
> Using gpu device 0: GeForce GTX 750 Ti (CNMeM is disabled, cuDNN not 
> available)
> ... loading data
> ... building the model
> ... training
> training @ iter =  0
> Traceback (most recent call last):
>   File "LENET.py", line 394, in 
> evaluate_lenet5()
>   File "LENET.py", line 301, in evaluate_lenet5
> cost_ij = train_model(minibatch_index)
>   File 
> "/home/bea/anaconda2/lib/python2.7/site-packages/theano/compile/function_module.py",
>  
> line 871, in __call__
> storage_map

[theano-users] GpuCorrMM encountered a CUBLAS error

2017-12-13 Thread Beatriz G.
Hi everyone.

I used to work with Theano and it worked perfectly, but after installing 
tensorflow with conda, and some dependencies to work with it, my Theano has 
stopped working.

I obtain the following error:

Using gpu device 0: GeForce GTX 750 Ti (CNMeM is disabled, cuDNN not 
available)
... loading data
... building the model
... training
training @ iter =  0
Traceback (most recent call last):
  File "LENET.py", line 394, in 
evaluate_lenet5()
  File "LENET.py", line 301, in evaluate_lenet5
cost_ij = train_model(minibatch_index)
  File 
"/home/bea/anaconda2/lib/python2.7/site-packages/theano/compile/function_module.py",
 
line 871, in __call__
storage_map=getattr(self.fn, 'storage_map', None))
  File 
"/home/bea/anaconda2/lib/python2.7/site-packages/theano/gof/link.py", line 
314, in raise_with_op
reraise(exc_type, exc_value, exc_trace)
  File 
"/home/bea/anaconda2/lib/python2.7/site-packages/theano/compile/function_module.py",
 
line 859, in __call__
outputs = self.fn()
RuntimeError: GpuCorrMM encountered a CUBLAS error: the library was not 
initialized
This could be a known bug in CUDA, please see the GpuCorrMM() documentation.

Apply node that caused the error: GpuCorrMM_gradWeights{valid, (1, 
1)}(GpuContiguous.0, GpuContiguous.0)
Toposort index: 28
Inputs types: [CudaNdarrayType(float32, (True, False, False, False)), 
CudaNdarrayType(float32, 4D)]
Inputs shapes: [(1, 500, 28, 28), (1, 20, 5, 5)]
Inputs strides: [(0, 784, 28, 1), (0, 25, 5, 1)]
Inputs values: ['not shown', 'not shown']
Outputs clients: [[GpuDimShuffle{1,0,2,3}(GpuCorrMM_gradWeights{valid, (1, 
1)}.0)]]

HINT: Re-running with most Theano optimization disabled could give you a 
back-trace of when this node was created. This can be done with by setting 
the Theano flag 'optimizer=fast_compile'. If that does not work, Theano 
optimizations can be disabled with 'optimizer=None'.
HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and 
storage map footprint of this apply node.

I have tried to install CUDA and cuDNN, but it does not work (apart from 
the toolkit that I had already installed).

My theanorc file looks like:
[global]
device = gpu
floatX = float32


[blas]
ldflags = -lopenblas


[nvcc]
flags=-D_FORCE_INLINES


I would appreciate any advice or help.

Regards.

Beatriz.



[theano-users] Change Logistic Regression Cost

2017-10-20 Thread Beatriz G.
Hi.

I have a class imbalance problem in my dataset: I have three or four times more 
positive samples than negative ones. I would like my CNN, during training, to 
penalize heavily when a negative sample is classified incorrectly. I think that 
I should change the cost in order to do that, but I do not know how. 
If someone could give me some tips I would be very pleased.

I am using the logistic regression cost implemented in the theano tutorial.
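
For reference, a minimal sketch of what a class-weighted version of that cost 
could look like, assuming the tutorial's LogisticRegression layer with its 
p_y_given_x output; the weight values and the layer name in the usage comment 
are only illustrative:

import theano
import theano.tensor as T

def weighted_negative_log_likelihood(p_y_given_x, y, class_weights):
    """Negative log-likelihood where each class contributes with its own weight.

    p_y_given_x   : (batch_size, n_classes) symbolic matrix of softmax outputs
    y             : symbolic int vector of labels
    class_weights : shared/symbolic vector of length n_classes
    """
    log_prob = T.log(p_y_given_x)[T.arange(y.shape[0]), y]
    return -T.mean(class_weights[y] * log_prob)

# Example: make mistakes on the rare class (assumed here to be label 0) roughly
# three times as costly as mistakes on the frequent class:
# class_weights = theano.shared(numpy.asarray([3.0, 1.0], dtype=theano.config.floatX))
# cost = weighted_negative_log_likelihood(layer3.p_y_given_x, y, class_weights)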

Thank you very much in advance.

Kind regards.

Beatriz.



Re: [theano-users] How to change the learning rate dynamically

2017-10-04 Thread Beatriz G.
I have understood it, and it works for me. I have modified the code so that the 
learning rate is changed after training each epoch and after validating.

The cost became nan because the learning rate value was too high; I am not 
entirely sure why.
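
For reference, a minimal runnable sketch of one way to decay the rate once per 
epoch with a shared variable's set_value, instead of appending (l_r, 0.95*l_r) 
to the updates; the toy train_model and the loop bounds are only illustrative:

import numpy as np
import theano
import theano.tensor as T

# shared learning rate that the parameter updates in the real script refer to
l_r = theano.shared(np.asarray(0.1, dtype=theano.config.floatX), name='l_r')

# toy stand-in for the real train_model built with `updates`
x = T.iscalar('x')
train_model = theano.function([x], x * l_r)

n_epochs, n_train_batches = 3, 4                  # illustrative values only
for epoch in range(n_epochs):
    for minibatch_index in range(n_train_batches):
        cost_ij = train_model(minibatch_index)
    # decay once per epoch, after validation, instead of once per minibatch
    l_r.set_value(np.asarray(l_r.get_value() * 0.95, dtype=theano.config.floatX))
    print('epoch %d, learning rate %.5f' % (epoch + 1, l_r.get_value()))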

On Tuesday, October 3, 2017 at 23:36:00 (UTC+2), Michael Klachko wrote:
>
> The learning rate schedule is defined in this line: 
> updates.append((l_r,0.95*l_r)), and is used in this line: train_model = 
> theano.function([index], cost, updates=updates ...
> If you don't understand what's going on, read about theano functions.
>
>
>
>
> On Thursday, September 7, 2017 at 6:58:34 AM UTC-7, Beatriz G. wrote:
>>
>> Hi. 
>>
>> I have tried to apply your code, but it does not work for me, I got "nan" 
>> at training cost.
>>
>> l_r = theano.shared(np.array(learning_rate, 
>> dtype=theano.config.floatX))
>> 
>> cost = layer8.negative_log_likelihood(y)
>> 
>>
>>validate_model = theano.function(
>> [index],
>> layer8.errors(y),
>> givens={
>> x: valid_set_x[index * batch_size: (index + 1) * batch_size],
>> y: valid_set_y[index * batch_size: (index + 1) * batch_size],
>> is_train: numpy.cast['int32'](0)
>>
>> }
>> )
>>
>> # create a list of all model parameters to be fit by gradient descent
>> params = layer0.params + layer1.params + layer2.params + 
>> layer3.params + layer4.params + layer5.params + layer6.params + 
>> layer7.params + layer8.params
>>
>> # create a list of gradients for all model parameters
>> grads = T.grad(cost, params)
>>
>> ## Learning rate update
>> updates = [
>> (param_i, param_i - l_r * grad_i)
>> for param_i, grad_i in zip(params, grads)
>> ]
>> updates.append((l_r,0.95*l_r))
>>
>> train_model = theano.function([index], cost, updates=updates,
>>   givens={
>>   x: train_set_x[index * batch_size: 
>> (index + 1) * batch_size],
>>   y: train_set_y[index * batch_size: 
>> (index + 1) * batch_size],
>>   is_train: np.cast['int32'](1)})
>> in the while loop:
>> cost_ij = train_model(minibatch_index)
>> when it is time to validate:
>>   validation_losses = [validate_model(i) for i in 
>> range(n_valid_batches)]
>>
>>
>> The learning rate updates its value, the validation error is 90% 
>> continuously and the training cost is "nan" 
>>
>> Thank you in advance.
>>
>> Regards.
>>
>>
>> On Monday, October 6, 2014 at 18:40:20 (UTC+2), Pascal Lamblin wrote:
>>>
>>> On Mon, Oct 06, 2014, Frédéric Bastien wrote: 
>>> > Exactly. 
>>>
>>> Also, you can make l_r a shared variable, and update it via a Theano 
>>> expression, if it is convenient to do so. For instance: 
>>>
>>> l_r = theano.shared(np.array(0.1, dtype=theano.config.floatX)) 
>>> ... 
>>> updates.append((l_r, 0.9 * l_r))  # Or however you want to change your 
>>> learning rate 
>>>
>>> train_model = theano.function([index], cost, updates=updates, 
>>> givens=...) 
>>>
>>> > 
>>> > Fred 
>>> > 
>>> > On Mon, Oct 6, 2014 at 11:14 AM, Ofir Levy <micr...@gmail.com> wrote: 
>>> > 
>>> > > ok I think I got it 
>>> > > 
>>> > > learning_rate = 0.1 
>>> > > 
>>> > > l_r = T.scalar('l_r', dtype=theano.config.floatX) 
>>> > > 
>>> > > updates = []for param_i, grad_i in zip(params, grads): 
>>> > > updates.append((param_i, param_i - l_r * grad_i)) 
>>> > > 
>>> > > train_model = theano.function([index,l_r], cost, updates = updates, 
>>> > > givens={ 
>>> > > x: train_set_x[index * batch_size: (index + 1) * 
>>> batch_size], 
>>> > > y: train_set_y[index * batch_size: (index + 1) * 
>>> batch_size]}) 
>>> > > 
>>> > > and in the training loop: 
>>> > > 
>>> > > cost_ij = train_model(minibatch_index, learning_rate) 
>>> > > 
>>> > > 
>>> > > 
>>> > > On Monday, October 6, 2014 5:38:33 PM UTC+3, Ofir Le

Re: [theano-users] How to change the learning rate dynamically

2017-09-07 Thread Beatriz G.
Hi. 

I have tried to apply your code, but it does not work for me, I got "nan" 
at training cost.

l_r = theano.shared(np.array(learning_rate, dtype=theano.config.floatX))

cost = layer8.negative_log_likelihood(y)


   validate_model = theano.function(
[index],
layer8.errors(y),
givens={
x: valid_set_x[index * batch_size: (index + 1) * batch_size],
y: valid_set_y[index * batch_size: (index + 1) * batch_size],
is_train: numpy.cast['int32'](0)

}
)

# create a list of all model parameters to be fit by gradient descent
params = layer0.params + layer1.params + layer2.params + layer3.params 
+ layer4.params + layer5.params + layer6.params + layer7.params + 
layer8.params

# create a list of gradients for all model parameters
grads = T.grad(cost, params)

## Learning rate update
updates = [
(param_i, param_i - l_r * grad_i)
for param_i, grad_i in zip(params, grads)
]
updates.append((l_r,0.95*l_r))

train_model = theano.function([index], cost, updates=updates,
  givens={
  x: train_set_x[index * batch_size: 
(index + 1) * batch_size],
  y: train_set_y[index * batch_size: 
(index + 1) * batch_size],
  is_train: np.cast['int32'](1)})
in the while loop:
cost_ij = train_model(minibatch_index)
when it is time to validate:
  validation_losses = [validate_model(i) for i in 
range(n_valid_batches)]


The learning rate updates its value, but the validation error stays at 90% 
continuously and the training cost is "nan".

Thank you in advance.

Regards.


On Monday, October 6, 2014 at 18:40:20 (UTC+2), Pascal Lamblin wrote:
>
> On Mon, Oct 06, 2014, Frédéric Bastien wrote: 
> > Exactly. 
>
> Also, you can make l_r a shared variable, and update it via a Theano 
> expression, if it is convenient to do so. For instance: 
>
> l_r = theano.shared(np.array(0.1, dtype=theano.config.floatX)) 
> ... 
> updates.append((l_r, 0.9 * l_r))  # Or however you want to change your 
> learning rate 
>
> train_model = theano.function([index], cost, updates=updates, givens=...) 
>
> > 
> > Fred 
> > 
> > On Mon, Oct 6, 2014 at 11:14 AM, Ofir Levy  > wrote: 
> > 
> > > ok I think I got it 
> > > 
> > > learning_rate = 0.1 
> > > 
> > > l_r = T.scalar('l_r', dtype=theano.config.floatX) 
> > > 
> > > updates = []for param_i, grad_i in zip(params, grads): 
> > > updates.append((param_i, param_i - l_r * grad_i)) 
> > > 
> > > train_model = theano.function([index,l_r], cost, updates = updates, 
> > > givens={ 
> > > x: train_set_x[index * batch_size: (index + 1) * 
> batch_size], 
> > > y: train_set_y[index * batch_size: (index + 1) * 
> batch_size]}) 
> > > 
> > > and in the training loop: 
> > > 
> > > cost_ij = train_model(minibatch_index, learning_rate) 
> > > 
> > > 
> > > 
> > > On Monday, October 6, 2014 5:38:33 PM UTC+3, Ofir Levy wrote: 
> > >> 
> > >> for the CNN example we currently have the following code: 
> > >> 
> > >> learning_rate = 0.1 
> > >> 
> > >> updates = []for param_i, grad_i in zip(params, grads): 
> > >> updates.append((param_i, param_i - learning_rate * 
> grad_i))train_model = theano.function([index], cost, updates = updates, 
> > >> givens={ 
> > >> x: train_set_x[index * batch_size: (index + 1) * 
> batch_size], 
> > >> y: train_set_y[index * batch_size: (index + 1) * 
> batch_size]}) 
> > >> 
> > >> and in the training loop: 
> > >> 
> > >> cost_ij = train_model(minibatch_index) 
> > >> 
> > >> 
> > >> can you kindly tell me how to change it to have a adaptive learning 
> rate? 
> > >> 
> > >> 
> > >> 
> > >> 
> > >> 
> > >> 
> > >> On Thursday, July 17, 2014 9:48:24 PM UTC+3, Frédéric Bastien wrote: 
> > >>> 
> > >>> Make a theano variable that is the learning rate and pass it as an 
> input 
> > >>> to your theano function. 
> > >>> 
> > >>> You could also use a shared variable is you don't want to pass it 
> > >>> explicitly each time, but only change it from time to time: 
> > >>> 
> > >>> http://deeplearning.net/software/theano/tutorial/ 
> > >>> examples.html#using-shared-variables 
> > >>> 
> > >>> Fred 
> > >>> 
> > >>> 
> > >>> On Thu, Jul 17, 2014 at 2:42 PM,  wrote: 
> > >>> 
> >  Hi, 
> >  
> >  I would to change the learning rate in learning procedure. A large 
> >  learning rate is used in the initial stage and small is used in 
> later. How 
> >  can I do. 
> >  
> >  Thanks 
> >  Jiancheng 
> >  

[theano-users] Re: Momentum term

2017-07-18 Thread Beatriz G.
Hi everyone.

I would like to know what momentum is used for. I think it has something 
to do with the weight updates; I have been reading about it but I do not 
fully understand it. Does it have something to do with a dynamic learning 
rate?

Regards.


On Thursday, March 6, 2014 at 17:25:01 (UTC+1), Al Docherty wrote:
>
> Hello again,
>
> I'm considering adding momentum to my neural network implementation. The 
> gradients and updates are calculated as so:
>
> ### OBTAIN PARAMETERS AND GRADIENTS 
>   gparams = []
>   for param in classifier.params:
> gparam = T.grad(printcost, param)
> gparams.append(gparam)
> 
>   ### CALCULATE CHANGE IN WEIGHTS
>   updates = [] 
>   for param, gparam in zip(classifier.params, gparams):
> updates.append((param, param-eta * gparam))
>
>
> I know I need to add the momentum term to the updates.append line. But how 
> do I store an old set of gradients? 
>
> Al
>
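
For reference, a minimal sketch of classical momentum built on top of the 
quoted update loop. The velocity shared variables (one per parameter, holding 
the previous update) and the mu coefficient are illustrative additions, not 
part of the original code:

import numpy as np
import theano

def momentum_updates(params, gparams, eta=0.01, mu=0.9):
    """SGD with classical momentum: v <- mu*v - eta*grad; param <- param + v."""
    updates = []
    for param, gparam in zip(params, gparams):
        # one shared 'velocity' per parameter stores the previous update direction
        velocity = theano.shared(np.zeros(param.get_value().shape,
                                          dtype=theano.config.floatX))
        new_velocity = mu * velocity - eta * gparam
        updates.append((velocity, new_velocity))
        updates.append((param, param + new_velocity))
    return updates

# usage with the quoted code:
# updates = momentum_updates(classifier.params, gparams, eta=eta, mu=0.9)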



[theano-users] Lenet gaussian initialization

2017-01-25 Thread Beatriz G.
Hi everyone.

I am trying to check how the error changes and how the network behaves when 
the weight initialization and the biases are changed. I have changed those in 
the LeNet deeplearning.net example.

I have changed in the convolutional class layer:

self.W = theano.shared(numpy.asarray(
rng.normal(mean, std, size=filter_shape),
dtype=theano.config.floatX
),
borrow=True
)

# the bias is a 1D tensor -- one bias per output feature map
b_values = numpy.ones((filter_shape[0],), dtype=theano.config.floatX)

And in the hidden layer the same:

if W is None:
W_values = numpy.asarray(rng.normal(mean, std, size = (n_in, n_out)
),
dtype=theano.config.floatX
)
if activation == theano.tensor.nnet.sigmoid:
W_values *= 4

W = theano.shared(value=W_values, name='W', borrow=True)

if b is None:
b_values = numpy.ones((n_out,), dtype=theano.config.floatX)
b = theano.shared(value=b_values, name='b', borrow=True)



The values of mean and std are 0 and 0.01 respectively. I have run the code 
and in training/validation I am getting errors of about 89-90% in the first 
epochs, and then it gets stuck at a 90.85% error rate.

Am I doing something wrong when I initialize the weights? Or could anybody 
help me understand the results?
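
For comparison, a minimal sketch of a fan-in-scaled Gaussian initialization for 
the convolutional weights; the scaling rule is a common alternative and an 
illustrative choice here, not something from the tutorial:

import numpy
import theano

def gaussian_init(rng, filter_shape, std_scale=1.0):
    """Zero-mean Gaussian weights whose std shrinks with the fan-in of the layer."""
    fan_in = numpy.prod(filter_shape[1:])   # num input feature maps * filter h * filter w
    std = std_scale / numpy.sqrt(fan_in)
    W_values = numpy.asarray(rng.normal(0.0, std, size=filter_shape),
                             dtype=theano.config.floatX)
    return theano.shared(value=W_values, borrow=True)

# e.g. W = gaussian_init(numpy.random.RandomState(1234), (20, 1, 5, 5))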

Thank you in advance.

Beatriz.



[theano-users] Network does not update its weights

2017-01-16 Thread Beatriz G.
Hi.

I have been trying to develop an ImageNet-style network. I started from the 
LeNet code and, by modifying it and adding things, I would like to end up with 
something like ImageNet. My problem is that the weights are not updated, so the 
network does not learn, and I cannot figure out where the problem is or what to 
do. I am really stressed and blocked, so any help or idea would be welcome!

I have attached two files, "layers.py" where the layers are described and 
"network.py" where the architecture and the learning/testing processing is 
described and carried out.

Thank you very much in advance.

Regards.

Beatriz


import numpy as np
import theano
import theano.tensor as T
from theano.tensor.signal import pool
from theano.tensor.nnet import conv2d
from pylearn2.expr.normalize import CrossChannelNormalization



class Fully_Connected_Dropout(object):
# http://christianherta.de/lehre/dataScience/machineLearning/neuralNetworks/Dropout.php

def __init__(self, rng, is_train, input, n_in, n_out, W=None, b=None, p=0.5):
self.input = input
# end-snippet-1

rng = np.random.RandomState(1234)
srng = T.shared_randomstreams.RandomStreams(rng.randint(99))

# for a discussion of the initialization, see
# https://plus.google.com/+EricBattenberg/posts/f3tPKjo7LFa
if W is None:
W_values = np.asarray(
rng.uniform(
low=-np.sqrt(6. / (n_in + n_out)),
high=np.sqrt(6. / (n_in + n_out)),
size=(n_in, n_out)
),
dtype=theano.config.floatX
)
W = theano.shared(value=W_values, name='W', borrow=True)

# init biases to positive values, so we should be initially in the linear regime of the linear rectified function
if b is None:
b_values = np.ones((n_out,), dtype=theano.config.floatX) * np.cast[theano.config.floatX](0.01)
b = theano.shared(value=b_values, name='b', borrow=True)

self.W = W
self.b = b

lin_output = T.dot(input, self.W) + self.b

output = theano.tensor.nnet.relu(lin_output)

# multiply output and drop -> in an approximation the scaling effects cancel out

input_drop = np.cast[theano.config.floatX](1. / p) * output

mask = srng.binomial(n=1, p=p, size=input_drop.shape, dtype=theano.config.floatX)
train_output = input_drop * mask

# is_train is a pseudo boolean theano variable for switching between training and prediction
self.output = T.switch(T.neq(is_train, 0), train_output, output)

# parameters of the model
self.params = [self.W, self.b]




class Fully_Connected_Softmax(object):
def __init__(self, rng, input, n_in, n_out, W=None, b=None,
 activation=theano.tensor.nnet.relu):

self.input = input



if W is None:
W_values = np.asarray(
rng.uniform(
low=-np.sqrt(6. / (n_in + n_out)),
high=np.sqrt(6. / (n_in + n_out)),
size=(n_in, n_out)
),
dtype=theano.config.floatX
)
if activation == theano.tensor.nnet.sigmoid:
W_values *= 4

W = theano.shared(value=W_values, name='W', borrow=True)

if b is None:
b_values = np.zeros((n_out,), dtype=theano.config.floatX)
b = theano.shared(value=b_values, name='b', borrow=True)

self.W = W
self.b = b

lin_output = T.nnet.softmax(T.dot(input, self.W) + self.b)

self.output = (
lin_output if activation is None
else activation(lin_output)
)


self.params = [self.W, self.b]






class LeNetConvPoolLRNLayer(object):
def __init__(self, rng, input, filter_shape, image_shape, poolsize=(2, 2), stride=(1, 1), lrn=False):
"""
Allocate a LeNetConvPoolLayer with shared variable internal parameters.

:type rng: numpy.random.RandomState
:param rng: a random number generator used to initialize weights

:type input: theano.tensor.dtensor4
:param input: symbolic image tensor, of shape image_shape

:type filter_shape: tuple or list of length 4
:param filter_shape: (number of filters, num input feature maps,
  filter height, filter width)

:type image_shape: tuple or list of length 4
:param image_shape: (batch size, num input feature maps,
 image height, image width)

:type poolsize: tuple or list of 

[theano-users] Know if your network is running properly

2016-11-29 Thread Beatriz G.
Hi everyone.

I have implemented my CNN. It runs, but I do not know if it works well. I 
have used a response normalization layer and I am not sure if I have used it 
correctly.

The code runs, but I do not know how to check whether the CNN is working 
properly. Any suggestions?


Thank you for your help :)




[theano-users] layer order implementation

2016-11-29 Thread Beatriz G.
Hi everyone.

I would like to implement a convolutional and pooling layer with response 
normalization and a ReLU as the activation function, but I do not know whether 
the response normalization layer has to go before or after the pooling layer, 
or where the activation function has to be placed. I would like the first 
layer to be the conv layer.
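
For reference, a minimal sketch of one common ordering (AlexNet-style: 
convolution, then ReLU, then cross-channel LRN, then max-pooling). The lrn 
argument stands for whatever local response normalization function is available 
(e.g. pylearn2's CrossChannelNormalization or a hand-written one) and is an 
assumption here:

import theano.tensor as T
from theano.tensor.nnet import conv2d
from theano.tensor.signal import pool

def conv_relu_lrn_pool(input, W, b, lrn, poolsize=(2, 2)):
    conv_out = conv2d(input=input, filters=W)                        # 1. convolution
    activ = T.nnet.relu(conv_out + b.dimshuffle('x', 0, 'x', 'x'))   # 2. activation
    normed = lrn(activ)                                              # 3. response normalization
    pooled = pool.pool_2d(normed, ws=poolsize,                       # 4. max-pooling
                          ignore_border=True)                        # ('ws' is the new name of 'ds')
    return pooled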

I am sorry if this is a very low-level question.

Thank you for you help.



[theano-users] Softmax is a classifier?

2016-11-24 Thread Beatriz G.
Hi everyone. I am trying to build a CNN based on ImageNet. The paper which 
I am following says that the architecture is formed by convolutional layers 
and fully connected layers, and that the last layer, i.e. the output layer, is 
followed by a softmax. Then it says that, after extracting the features from 
the last fully connected layer, it uses an SVM as the classifier.

I do not know whether the input of the classifier is the output of the softmax.

I also thought that the softmax was a classifier, so I must be wrong.
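
As far as I understand such papers, the SVM is usually fed the activations of 
the last fully connected layer (taken before the softmax), not the softmax 
probabilities. A minimal self-contained sketch of that idea, using random data 
and scikit-learn's SVM purely for illustration (the tiny graph below is a 
stand-in for the real CNN):

import numpy as np
import theano
import theano.tensor as T
from sklearn.svm import LinearSVC   # assumption: scikit-learn is available

x = T.matrix('x')
W = theano.shared(np.random.randn(20, 8).astype(theano.config.floatX))
b = theano.shared(np.zeros(8, dtype=theano.config.floatX))
fc_output = T.nnet.relu(T.dot(x, W) + b)   # stand-in for the last fully connected layer

# features are extracted *before* the softmax
extract_features = theano.function([x], fc_output)

data = np.random.randn(100, 20).astype(theano.config.floatX)
labels = np.random.randint(0, 2, size=100)

svm = LinearSVC()                          # the SVM is the final classifier
svm.fit(extract_features(data), labels)
print(svm.score(extract_features(data), labels))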


Regards.



[theano-users] Re: normalization code in CNN?

2016-11-22 Thread Beatriz G.
Could pylearn2's CrossChannelNormalization be used as a local response 
normalization?

Regards.

On Tuesday, November 22, 2016 at 16:15:08 (UTC+1), Beatriz G. wrote:
>
> Hi.
>
> I have the same problem.
>
> Anyone could help me?
>
> On Wednesday, July 30, 2014 at 5:44:54 (UTC+2), xu shen wrote:
>>
>> Does anyone have the normalization code for LeNetPoolLayer?
>>
>> The convolution layer is defined as follows:
>> Class LeNetConvPoolLayer(object):
>>
>> conv_out=conv.conv2d(input=input, filters=self.W, 
>> image_shape=image_shape,filter_shape=filter_shape, subsample = subsampe)
>>  
>> norm_out = normalizer(conv_out,k,n,alpha,beta)# I want to do 
>> normalization here just as Alex Krizhevsky does in DCNN, but I can not find 
>> proper code to do this in theano.
>>
>> pooled_out = 
>> downsample.max_pool_2d(input=norm_out,ds=poolsize,ignore_border=True)
>>
>> I found out that it's hard to incorporate CrossChannelNormalization in 
>> pylearn2.expr.normalize for this task... 
>> can any one share your normalization theano code in CNN or how to use 
>> CrossChannelNormalizaiton in this code?
>> Thanks very much.
>>
>
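
For reference, a minimal sketch of a cross-channel local response normalization 
written directly in Theano, assuming bc01 axis ordering (batch, channels, rows, 
cols); note that, as far as I know, pylearn2's CrossChannelNormalization expects 
c01b ordering, and conventions differ on whether alpha is divided by n:

import theano.tensor as T

def local_response_norm(x, k=2.0, n=5, alpha=1e-4, beta=0.75):
    """AlexNet-style LRN: b_i = a_i / (k + alpha/n * sum_j a_j^2)**beta,
    where the sum runs over the n channels centred on channel i."""
    half = n // 2
    sq = T.sqr(x)
    batch, ch, rows, cols = x.shape
    # zero-pad the channel axis so every channel has `half` neighbours on each side
    padded = T.zeros((batch, ch + 2 * half, rows, cols), dtype=x.dtype)
    padded = T.set_subtensor(padded[:, half:half + ch, :, :], sq)
    scale = k
    for i in range(n):
        scale = scale + (alpha / n) * padded[:, i:i + ch, :, :]
    return x / (scale ** beta)

# e.g. norm_out = local_response_norm(conv_out), then max-pool norm_out as before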



[theano-users] Re: normalization code in CNN?

2016-11-22 Thread Beatriz G.
Hi.

I have the same problem.

Anyone could help me?

On Wednesday, July 30, 2014 at 5:44:54 (UTC+2), xu shen wrote:
>
> Does anyone have the normalization code for LeNetPoolLayer?
>
> The convolution layer is defined as follows:
> Class LeNetConvPoolLayer(object):
>
> conv_out=conv.conv2d(input=input, filters=self.W, 
> image_shape=image_shape,filter_shape=filter_shape, subsample = subsampe)
>  
> norm_out = normalizer(conv_out,k,n,alpha,beta)# I want to do 
> normalization here just as Alex Krizhevsky does in DCNN, but I can not find 
> proper code to do this in theano.
>
> pooled_out = 
> downsample.max_pool_2d(input=norm_out,ds=poolsize,ignore_border=True)
>
> I found out that it's hard to incorporate CrossChannelNormalization in 
> pylearn2.expr.normalize for this task... 
> can any one share your normalization theano code in CNN or how to use 
> CrossChannelNormalizaiton in this code?
> Thanks very much.
>



Re: [theano-users] Error trying to use subsample=(2,2)

2016-11-17 Thread Beatriz G.
thank you.

On Wednesday, November 16, 2016 at 21:41:15 (UTC+1), nouiz wrote:
>
> You are mixing old/new Theano with old/new version of LeNet. If you use 
> the dev version of Theano,
> make sure to update your LeNet example.
>
> Some parameter name changes.
>
> Make sure to use the dev version of Theano and the last version of LeNet.
>
>
> On Wed, Nov 16, 2016 at 6:57 AM, Beatriz G. <beaa...@gmail.com 
> > wrote:
>
>> Hi Everyone, I am trying to use strides (subsample) in LeNet example.
>>
>> So I have just changed:
>>
>>  # convolve input feature maps with filters
>> conv_out = conv2d(
>> input=input,
>> filters=self.W,
>> filter_shape=filter_shape,
>> input_shape=image_shape
>> )
>>
>> # pool each feature map individually, using maxpooling
>> pooled_out = pool.pool_2d(
>> input=conv_out,
>> ds=poolsize,
>> ignore_border=True
>> )
>>
>>
>>
>> into:
>>
>>
>>
>> conv_out = conv.conv2d(
>> input=input,
>> filters=self.W,
>> subsample=(2,2),
>> filter_shape=filter_shape,
>> input_shape=image_shape
>> )
>>
>> # downsample each feature map individually, using maxpooling
>> pooled_out = pool.pool_2d(
>> input=conv_out,
>> ds=poolsize,
>> ignore_border=True,
>> mode='max'
>> )
>>
>>
>> And I am having the following error:
>>
>>
>>
>> Traceback (most recent call last):
>> ... building the model
>>   File "/home/beaa/Escritorio/Theano/Lenet_original/ejemploCNN_LeNet.py", 
>> line 427, in 
>> evaluate_lenet5()
>>   File "/home/beaa/Escritorio/Theano/Lenet_original/ejemploCNN_LeNet.py", 
>> line 193, in evaluate_lenet5
>> poolsize=(2, 2)
>>   File "/home/beaa/Escritorio/Theano/Lenet_original/ejemploCNN_LeNet.py", 
>> line 105, in __init__
>> input_shape=image_shape
>>   File 
>> "/home/beaa/.local/lib/python2.7/site-packages/theano/tensor/nnet/conv.py", 
>> line 151, in conv2d
>> imshp=imshp, kshp=kshp, nkern=nkern, bsize=bsize, **kargs)
>> TypeError: __init__() got an unexpected keyword argument 'input_shape'
>>
>>
>>
>>
>>
>>
>> Also, I would like to know if the 'fanout' variable has to be changed, 
>> because the number of output neurons would not be the same. Any ideas?
>>
>> Thanks for your help.
>>
>> Regards.
>>



[theano-users] Re: Dropout on MLP

2016-11-16 Thread Beatriz G.
Hi, did you solve this? Could you tell us how you did it? I would 
appreciate it.

On Tuesday, December 8, 2015 at 21:57:35 (UTC+1), J Zam wrote:
>
> Hi,
>
> This may have been asked before but I haven't found an answer for it in 
> the topics. I'm trying to apply dropout to an MLP with a linear regression 
> layer as output. My question is with regards to the dropout component, 
> after looking around, I have my dropout function as:
>
> def drop(input, rng, p=0.5): 
>
> srng = RandomStreams(rng.randint(99))
> 
> mask = srng.binomial(n=1, p=1.-p, size=input.shape)
> return input * T.cast(mask, theano.config.floatX) / (1.-p)
>
> I'm not sure I understand correctly, but why is there a need to divide by 
> (1. -p) ?
>
> Also, I have been reading that there is a need for re-scaling of weights 
> when dropout is applied:
>
>
> http://christianherta.de/lehre/dataScience/machineLearning/neuralNetworks/Dropout.php
> http://arxiv.org/pdf/1207.0580v1.pdf (A.1)
>
> and I'm not sure at what step to do this or what is it that it 
> accomplishes. 
>
> I'm trying to get my head around it and any help would be appreciate it.
>
> Thanks!
>
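
For reference, a minimal sketch of the two equivalent ways of keeping the 
expected activation unchanged; the division by (1 - p) in the quoted drop 
function is the first one ("inverted dropout"), so no weight re-scaling is 
needed at test time. Names are illustrative:

import theano
import theano.tensor as T
from theano.tensor.shared_randomstreams import RandomStreams

srng = RandomStreams(seed=1234)
p = 0.5   # probability of dropping a unit

def drop_train(activations):
    # each unit survives with probability (1 - p); dividing by (1 - p) keeps
    # the expected output equal to the no-dropout activation ("inverted
    # dropout"), so at test time the layer is used completely unchanged
    mask = srng.binomial(n=1, p=1. - p, size=activations.shape,
                         dtype=theano.config.floatX)
    return activations * mask / (1. - p)

# The alternative in Hinton et al. (arXiv:1207.0580, A.1): do NOT divide during
# training, and instead multiply the learned weights by (1 - p) once, at test time.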



[theano-users] Error trying to use subsample=(2,2)

2016-11-16 Thread Beatriz G.
Hi Everyone, I am trying to use strides (subsample) in LeNet example.

So I have just changed:

 # convolve input feature maps with filters
conv_out = conv2d(
input=input,
filters=self.W,
filter_shape=filter_shape,
input_shape=image_shape
)

# pool each feature map individually, using maxpooling
pooled_out = pool.pool_2d(
input=conv_out,
ds=poolsize,
ignore_border=True
)



into:



conv_out = conv.conv2d(
input=input,
filters=self.W,
subsample=(2,2),
filter_shape=filter_shape,
input_shape=image_shape
)

# downsample each feature map individually, using maxpooling
pooled_out = pool.pool_2d(
input=conv_out,
ds=poolsize,
ignore_border=True,
mode='max'
)


And I am having the following error:



Traceback (most recent call last):
... building the model
  File "/home/beaa/Escritorio/Theano/Lenet_original/ejemploCNN_LeNet.py", line 
427, in 
evaluate_lenet5()
  File "/home/beaa/Escritorio/Theano/Lenet_original/ejemploCNN_LeNet.py", line 
193, in evaluate_lenet5
poolsize=(2, 2)
  File "/home/beaa/Escritorio/Theano/Lenet_original/ejemploCNN_LeNet.py", line 
105, in __init__
input_shape=image_shape
  File 
"/home/beaa/.local/lib/python2.7/site-packages/theano/tensor/nnet/conv.py", 
line 151, in conv2d
imshp=imshp, kshp=kshp, nkern=nkern, bsize=bsize, **kargs)
TypeError: __init__() got an unexpected keyword argument 'input_shape'






Also, I would like to know if the 'fanout' variable has to be changed, 
because the number of output neurons would not be the same. Any ideas?
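
For reference, a minimal sketch of the same block written against the newer 
interface (theano.tensor.nnet.conv2d rather than the old theano.tensor.nnet.conv 
module that appears in the traceback), which accepts both input_shape and 
subsample; variable names are reused from the snippet above:

from theano.tensor.nnet import conv2d      # new interface, not theano.tensor.nnet.conv
from theano.tensor.signal import pool

conv_out = conv2d(
    input=input,
    filters=self.W,
    filter_shape=filter_shape,
    input_shape=image_shape,
    subsample=(2, 2)                        # strides are supported here
)

# downsample each feature map individually, using maxpooling
pooled_out = pool.pool_2d(
    input=conv_out,
    ws=poolsize,                            # 'ds' was renamed to 'ws' in newer Theano
    ignore_border=True,
    mode='max'
)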

Thanks for your help.

Regards.



[theano-users] Re: ERROR (theano.sandbox.cuda): Failed to compile cuda_ndarray.cu Ubuntu16.04

2016-11-12 Thread Beatriz G.
Please, could anyone help me?

On Thursday, November 10, 2016 at 16:25:10 (UTC+1), Beatriz G. wrote:
>
> Hi, I am configuring the computer in order to work with GPU.
>
> Without GPU theano worked well, but when I created the .theanorc with the 
> recommended flags, it changed.
>
> First of all I will tell you my nvidia, g++ and nvcc version:
>
>
> +-----------------------------------------------------------------------------+
> | NVIDIA-SMI 367.57                 Driver Version: 367.57                    |
> |-------------------------------+----------------------+----------------------+
> | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
> | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
> |===============================+======================+======================|
> |   0  GeForce GTX 750 Ti  Off  | :06:00.0          On |                  N/A |
> | 33%   28C    P8     1W / 46W  |    86MiB / 1999MiB   |      0%      Default |
> +-------------------------------+----------------------+----------------------+
>
> +-----------------------------------------------------------------------------+
> | Processes:                                                       GPU Memory |
> |  GPU       PID  Type  Process name                               Usage      |
> |=============================================================================|
> |    0      1603     G  /usr/lib/xorg/Xorg                              5MiB  |
> |    0      3382     G  /usr/lib/xorg/Xorg                             79MiB  |
> +-----------------------------------------------------------------------------+
>
>
> nvcc: Cuda compilation tools, release 7.5, V7.5.17
> g++ (Ubuntu 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609
>
> As you can see I use Ubuntu 16.04
>
> Here is my error:
>
> In file included from /home/bea/anaconda2/include/python2.7/Python.h:8:0,
>  from mod.cu:3:
> /home/bea/anaconda2/include/python2.7/pyconfig.h:1193:0: warning: 
> "_POSIX_C_SOURCE" redefined
>  #define _POSIX_C_SOURCE 200112L
>  ^
> In file included from /usr/include/host_config.h:161:0,
>  from /usr/include/cuda_runtime.h:76,
>  from :0:
> /usr/include/features.h:228:0: note: this is the location of the previous 
> definition
>  # define _POSIX_C_SOURCE 200809L
>  ^
> In file included from /home/bea/anaconda2/include/python2.7/Python.h:8:0,
>  from mod.cu:3:
> /home/bea/anaconda2/include/python2.7/pyconfig.h:1215:0: warning: 
> "_XOPEN_SOURCE" redefined
>  #define _XOPEN_SOURCE 600
>  ^
> In file included from /usr/include/host_config.h:161:0,
>  from /usr/include/cuda_runtime.h:76,
>  from :0:
> /usr/include/features.h:169:0: note: this is the location of the previous 
> definition
>  # define _XOPEN_SOURCE 700
>  ^
> mod.cu(941): warning: pointless comparison of unsigned integer with zero
> mod.cu(3001): warning: conversion from a string literal to "char *" is 
> deprecated
> mod.cu(3004): warning: conversion from a string literal to "char *" is 
> deprecated
> mod.cu(3006): warning: conversion from a string literal to "char *" is 
> deprecated
> mod.cu(3009): warning: conversion from a string literal to "char *" is 
> deprecated
> mod.cu(3011): warning: conversion from a string literal to "char *" is 
> deprecated
> mod.cu(3014): warning: conversion from a string literal to "char *" is 
> deprecated
> mod.cu(3017): warning: conversion from a string literal to "char *" is 
> deprecated
> mod.cu(3020): warning: conversion from a string literal to "char *" is 
> deprecated
> mod.cu(3022): warning: conversion from a string literal to "char *" is 
> deprecated
> mod.cu(3025): warning: conversion from a string literal to "char *" is 
> deprecated
> mod.cu(3027): warning: conversion from a string literal to "char *" is 
> deprecated
> mod.cu(3030): warning: conversion from a string literal to "char *" is 
> deprecated
> mod.cu(3032): warning: conversion from a string literal to "char *" is 
> deprecated
> mod.cu(3035): warning: conversion from a string literal to "char *" is 
> deprecated
> mod.cu(3038): warning: conversion from a string literal to "char *" is 
> deprecated
> mod.cu(3041): warning: conversion from a string literal to "char *" is 
> deprecated
> mod.cu

[theano-users] ERROR (theano.sandbox.cuda): Failed to compile cuda_ndarray.cu Ubuntu16.04

2016-11-10 Thread Beatriz G.
Hi, I am configuring the computer in order to work with GPU.

Without GPU theano worked well, but when I created the .theanorc with the 
recommended flags, it changed.

First of all I will tell you my nvidia, g++ and nvcc version:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 367.57                 Driver Version: 367.57                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 750 Ti  Off  | :06:00.0          On |                  N/A |
| 33%   28C    P8     1W / 46W  |    86MiB / 1999MiB   |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0      1603     G  /usr/lib/xorg/Xorg                              5MiB  |
|    0      3382     G  /usr/lib/xorg/Xorg                             79MiB  |
+-----------------------------------------------------------------------------+


nvcc: Cuda compilation tools, release 7.5, V7.5.17
g++ (Ubuntu 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609

As you can see I use Ubuntu 16.04

Here is my error:

In file included from /home/bea/anaconda2/include/python2.7/Python.h:8:0,
 from mod.cu:3:
/home/bea/anaconda2/include/python2.7/pyconfig.h:1193:0: warning: 
"_POSIX_C_SOURCE" redefined
 #define _POSIX_C_SOURCE 200112L
 ^
In file included from /usr/include/host_config.h:161:0,
 from /usr/include/cuda_runtime.h:76,
 from :0:
/usr/include/features.h:228:0: note: this is the location of the previous 
definition
 # define _POSIX_C_SOURCE 200809L
 ^
In file included from /home/bea/anaconda2/include/python2.7/Python.h:8:0,
 from mod.cu:3:
/home/bea/anaconda2/include/python2.7/pyconfig.h:1215:0: warning: 
"_XOPEN_SOURCE" redefined
 #define _XOPEN_SOURCE 600
 ^
In file included from /usr/include/host_config.h:161:0,
 from /usr/include/cuda_runtime.h:76,
 from :0:
/usr/include/features.h:169:0: note: this is the location of the previous 
definition
 # define _XOPEN_SOURCE 700
 ^
mod.cu(941): warning: pointless comparison of unsigned integer with zero
mod.cu(3001): warning: conversion from a string literal to "char *" is 
deprecated
mod.cu(3004): warning: conversion from a string literal to "char *" is 
deprecated
mod.cu(3006): warning: conversion from a string literal to "char *" is 
deprecated
mod.cu(3009): warning: conversion from a string literal to "char *" is 
deprecated
mod.cu(3011): warning: conversion from a string literal to "char *" is 
deprecated
mod.cu(3014): warning: conversion from a string literal to "char *" is 
deprecated
mod.cu(3017): warning: conversion from a string literal to "char *" is 
deprecated
mod.cu(3020): warning: conversion from a string literal to "char *" is 
deprecated
mod.cu(3022): warning: conversion from a string literal to "char *" is 
deprecated
mod.cu(3025): warning: conversion from a string literal to "char *" is 
deprecated
mod.cu(3027): warning: conversion from a string literal to "char *" is 
deprecated
mod.cu(3030): warning: conversion from a string literal to "char *" is 
deprecated
mod.cu(3032): warning: conversion from a string literal to "char *" is 
deprecated
mod.cu(3035): warning: conversion from a string literal to "char *" is 
deprecated
mod.cu(3038): warning: conversion from a string literal to "char *" is 
deprecated
mod.cu(3041): warning: conversion from a string literal to "char *" is 
deprecated
mod.cu(3043): warning: conversion from a string literal to "char *" is 
deprecated
mod.cu(3046): warning: conversion from a string literal to "char *" is 
deprecated
mod.cu(3048): warning: conversion from a string literal to "char *" is 
deprecated
mod.cu(3051): warning: conversion from a string literal to "char *" is 
deprecated
In file included from /home/bea/anaconda2/include/python2.7/Python.h:8:0,
 from mod.cu:3:
/home/bea/anaconda2/include/python2.7/pyconfig.h:1193:0: warning: 
"_POSIX_C_SOURCE" redefined
 #define _POSIX_C_SOURCE 200112L
 ^
In file included from /usr/include/host_config.h:161:0,
 from /usr/include/cuda_runtime.h:76,
 from :0:
/usr/include/features.h:228:0: note: this is the location of the previous 
definition
 # define _POSIX_C_SOURCE 200809L
 ^
In file included from 

Re: [theano-users] Question about LeNet-5 hidden layer as implemented on deeplearning.net

2016-11-10 Thread Beatriz G.
Yes, that was the question! Thank you!

On Wednesday, November 9, 2016 at 15:26:32 (UTC+1), nouiz wrote:
>
> I'm not sure I understand the question correctly. But I think the answer 
> is yes.
>
> Hidden layer in an MLP are fully connected layers.
>
> Fred
>
> On Wed, Nov 9, 2016 at 7:26 AM, Beatriz G. <beaa...@gmail.com 
> > wrote:
>
>> Hi, 
>>
>> Could an independent hidden layer of an MLP be used as a fully connected 
>> layer?
>>
>> regards.
>>
>>
>> On Friday, March 13, 2015 at 18:25:38 (UTC+1), Pascal Lamblin wrote:
>>>
>>> Hi, 
>>>
>>> On Fri, Mar 13, 2015, Orry Messer wrote: 
>>> > I don't quite understand the structure of the network after the second 
>>> > convolution/pooling layer and just before the hidden layer. 
>>> > I think what is confusing me is the batch aspect of it. 
>>> > With a batch size of 500, the hidden layer will have 500 units. This 
>>> much 
>>> > I'm ok with. 
>>>
>>> That's not right... by coincidence, the batch size is 500, which is 
>>> also the number of output units of that layer. Each of these units will 
>>> compute a different value for each of the examples in the batch, so the 
>>> output of that layer will be (batch_size, n_outputs) or (500, 500). 
>>>
>>> > But what is the input to this hidden layer? In the comments it says 
>>> that 
>>> > the hidden layer operates on matrices of size (batch_size, 
>>> pixel_size), 
>>> > which in the case of this code 
>>> > is (500, 800). 
>>>
>>> That's correct. 
>>>
>>> > Does this then mean that each hidden unit has 500*800 inputs? 
>>>
>>> Not really. Each hidden unit has 800 scalar inputs. Each of these inputs 
>>> will take a different value for each example of the minibatch, and the 
>>> neuron will also output one value for each example. 
>>>
>>> > And since there are 500 hidden units, does this then mean that there 
>>> > are a total of 500*(500*800) connections and as many weights to tune 
>>> > in this layer alone? 
>>>
>>> No, the weights are the same for all the examples of the minibatch. 
>>>
>>> -- 
>>> Pascal 
>>>



Re: [theano-users] Question about LeNet-5 hidden layer as implemented on deeplearning.net

2016-11-09 Thread Beatriz G.
Hi, 

Could an independent hidden layer of an MLP be used as a fully connected layer?

regards.

On Friday, March 13, 2015 at 18:25:38 (UTC+1), Pascal Lamblin wrote:
>
> Hi, 
>
> On Fri, Mar 13, 2015, Orry Messer wrote: 
> > I don't quite understand the structure of the network after the second 
> > convolution/pooling layer and just before the hidden layer. 
> > I think what is confusing me is the batch aspect of it. 
> > With a batch size of 500, the hidden layer will have 500 units. This 
> much 
> > I'm ok with. 
>
> That's not right... by coincidence, the batch size is 500, which is 
> also the number of output units of that layer. Each of these units will 
> compute a different value for each of the examples in the batch, so the 
> output of that layer will be (batch_size, n_outputs) or (500, 500). 
>
> > But what is the input to this hidden layer? In the comments it says that 
> > the hidden layer operates on matrices of size (batch_size, pixel_size), 
> > which in the case of this code 
> > is (500, 800). 
>
> That's correct. 
>
> > Does this then mean that each hidden unit has 500*800 inputs? 
>
> Not really. Each hidden unit has 800 scalar inputs. Each of these inputs 
> will take a different value for each example of the minibatch, and the 
> neuron will also output one value for each example. 
>
> > And since there are 500 hidden units, does this then mean that there 
> > are a total of 500*(500*800) connections and as many weights to tune 
> > in this layer alone? 
>
> No, the weights are the same for all the examples of the minibatch. 
>
> -- 
> Pascal 
>
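
A minimal numeric sketch of the shapes being described, with the numbers from 
this thread (batch_size 500, 800 flattened inputs, 500 hidden units):

import numpy as np

batch_size, n_in, n_out = 500, 800, 500
x = np.zeros((batch_size, n_in))   # one row of 800 features per example in the minibatch
W = np.zeros((n_in, n_out))        # a single weight matrix, shared by all 500 examples
b = np.zeros(n_out)

out = x.dot(W) + b                 # (500, 800) . (800, 500) -> (500, 500)
print(out.shape)                   # one row of 500 activations per example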



[theano-users] Re: response normalization layer

2016-11-09 Thread Beatriz G.
I just want to add that the response normalization I would like to apply 
is the local one.

Regards

On Tuesday, November 8, 2016 at 12:30:49 (UTC+1), Beatriz G. wrote:
>
> Also, I do not know if it is possible to add something in the conv2d layer 
> class to make the output response-normalized. Any help would be welcome.
>
> Thank you.
>
> On Tuesday, November 8, 2016 at 12:21:59 (UTC+1), Beatriz G. wrote:
>>
>> Does anyone know if there is something in Theano that works as a response 
>> normalization layer? If I have to implement it and anyone knows how to do it, 
>> any guide would be welcome.
>>
>> Regards. 
>>
>



[theano-users] Re: response normalization layer

2016-11-08 Thread Beatriz G.
Also, I do not know if it is possible to add something in the conv2d layer 
class to make the output response-normalized. Any help would be welcome.

Thank you.

On Tuesday, November 8, 2016 at 12:21:59 (UTC+1), Beatriz G. wrote:
>
> Does anyone know if there is something in Theano that works as a response 
> normalization layer? If I have to implement it and anyone knows how to do it, 
> any guide would be welcome.
>
> Regards. 
>



[theano-users] response normalization layer

2016-11-08 Thread Beatriz G.
Does anyone know if there is something in Theano that works as a response 
normalization layer? If I have to implement it and anyone knows how to do it, 
any guide would be welcome.

Regards.



Re: [theano-users] Why GPU can not speed up?

2016-11-08 Thread Beatriz G.
Hi Pascal, I get an error: TypeError: Cannot convert Type TensorType(uint8, 
matrix) (of Variable Subtensor{int64:int64:}.0) into Type 
TensorType(float64, matrix). You can try to manually convert 
Subtensor{int64:int64:}.0 into a TensorType(float64, matrix).
 
when I create the shared variables: 

train_set_x = theano.shared(numpy.array(train_set_x))
test_set_x = theano.shared(numpy.array(test_set_x))
train_set_y = theano.shared(numpy.array(y_train))
test_set_y = theano.shared(numpy.array(y_test))
valid_set_x = theano.shared(numpy.array(valid_set_x))
valid_set_y = theano.shared(numpy.array(y_val))


If I do not force them to be float64 and int32 it does not work:


train_set_x = theano.shared(numpy.array(train_set_x,dtype=float64))
test_set_x = theano.shared(numpy.array(test_set_x, dtype=float64))
train_set_y = theano.shared(numpy.array(y_train, dtype=int32))
test_set_y = theano.shared(numpy.array(y_test, dtype=int32))
valid_set_x = theano.shared(numpy.array(valid_set_x, dtype=float64))
valid_set_y = theano.shared(numpy.array(y_val,  dtype=int32))



But if I do that, it does not run on the GPU, and I want to use the GPU. Could 
you help me, please?
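
For reference, a minimal sketch of the pattern Pascal describes below and that 
the tutorial's shared_dataset uses: store everything as floatX (float32 when 
floatX=float32 is set) so it can live on the GPU, and hand the labels to the 
graph as int32 via T.cast; the variable names mirror the snippet above:

import numpy
import theano
import theano.tensor as T

def shared_dataset(data_x, data_y, borrow=True):
    # floatX storage is what allows the data to be moved to the GPU
    shared_x = theano.shared(numpy.asarray(data_x, dtype=theano.config.floatX),
                             borrow=borrow)
    shared_y = theano.shared(numpy.asarray(data_y, dtype=theano.config.floatX),
                             borrow=borrow)
    # the cost expects integer labels, so cast the symbolic view to int32
    return shared_x, T.cast(shared_y, 'int32')

# e.g. train_set_x, train_set_y = shared_dataset(train_set_x, y_train)
#      valid_set_x, valid_set_y = shared_dataset(valid_set_x, y_val)
# (with floatX=float32 and device=gpu set in .theanorc or THEANO_FLAGS)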



On Tuesday, June 9, 2015 at 23:47:21 (UTC+2), Pascal Lamblin wrote:
>
> Hi, 
>
> You seem to be using explicitly double-precision variables, which will 
> not be computed on the GPU using the default back-end. 
> You should explicitly use dtype='float32', T.fmatrix(), T.fvector(), etc., 
> or use dtype=theano.config.floatX, T.matrix, T.vector(), and explicitly 
> set "floatX=float32" in the Theano flags. 
>
> On Tue, Jun 09, 2015, Qixianbiao Qixianbiao wrote: 
> > Hello, 
> > 
> >  I am new to theano although I have used some other DL toolbox. 
> >  Recently, I write one Lenet5 CNN code based on theano.  After long time 
> > struggle, I finally can run it, but I find the decreasing speed of the 
> cost 
> > value is very slow. Meanwhile, the running speed on both GPU and CPU is 
> > similar although it shows that "Using GPU device 0: Tesla K20".   Any 
> > comments for the code style and denotation will be greatly appreciated. 
> > 
> > 
> > 
> > import theano 
> > import theano.tensor as T 
> > from theano.sandbox.rng_mrg import MRG_RandomStreams as RandomStreams 
> > from theano.tensor.signal import downsample 
> > from theano.tensor.nnet import conv 
> > import numpy as np 
> > import cPickle 
> > 
> > rng = RandomStreams(1234) 
> > 
> > 
> > 
> > def loadData(path='mnist.pkl.gz'): 
> > train_set, valid_set, test_set = cPickle.load(file('mnist.pkl', 
> 'r')) 
> > trX, trY = train_set 
> > valX, valY = valid_set 
> > teX, teY = test_set 
> > return trX, trY, valX, valY, teX, teY 
> > 
> > def dropout(X, p1): 
> > if p1>0.0: 
> > keep = 1 - p1 
> > X *= rng.binomial(X.shape, p=keep, dtype='float32') 
> > X = X/(1-p1) 
> > return X 
> > 
> > def rectify(X): 
> > return T.maximum(X, 0) 
> > 
> > def softmax(X, w, b): 
> > return T.nnet.softmax( T.dot(X, w) + b ) 
> > 
> > def pooling(X, poolsize=(2, 2)): 
> > return downsample.max_pool_2d(X, poolsize) 
> > 
> > def hiddenlayer(X, w, b): 
> > return T.dot(X, w) + b 
> > 
> > def convlayer(X, w, b): 
> > print X.dtype 
> > print w.dtype 
> > result = conv.conv2d(X, w) + b.dimshuffle('x', 0, 'x', 'x') 
> > return result 
> > 
> > def model(input, w1, b1, w2, b2, wh, bh, wo, bo): 
> > layer1 = convlayer(input, w1, b1) 
> > r1 = rectify(layer1) 
> > p1 = pooling(r1) 
> > d1 = dropout(p1, 0.2) 
> > 
> > layer2 = convlayer(d1, w2, b2) 
> > r2 = rectify(layer2) 
> > p2 = pooling(r2) 
> > d2 = dropout(p2, 0.4) 
> > 
> > d2flat = T.flatten(d2, outdim=2) 
> > hidden1 = hiddenlayer(d2flat, wh, bh) 
> > r2 = rectify(hidden1) 
> > 
> > out = softmax(r2, wo, bo) 
> > 
> > return out 
> > 
> > def costfunction(out, y): 
> > return T.mean( T.nnet.categorical_crossentropy(out, y) ) 
> > 
> > 
> > X = T.dtensor4() 
> > Y = T.dmatrix() 
> > 
> > 
> > w1np = np.zeros((6, 1, 5, 5), dtype='float64') 
> > w1 = theano.shared(w1np, borrow=True) 
> > b1np = np.zeros((6, ), dtype='float64') 
> > b1 = theano.shared(b1np, borrow=True) 
> > 
> > 
> > w2np = np.zeros((12, 6, 3, 3), dtype='float64') 
> > w2 = theano.shared(w2np, borrow=True) 
> > b2np = np.zeros((12, ), dtype='float64') 
> > b2 = theano.shared(b2np, borrow=True) 
> > 
> > 
> > whnp = np.zeros((300, 64), dtype='float64') 
> > wh = theano.shared(whnp, borrow=True) 
> > bhnp = np.zeros((64, ), dtype='float64') 
> > bh = theano.shared(bhnp, borrow=True) 
> > 
> > wonp = np.zeros((64, 10), dtype='float64') 
> > wo = theano.shared(wonp, borrow=True) 
> > bonp = np.zeros((10, ), dtype='float64') 
> > bo = theano.shared(bonp, borrow=True) 
> > 
> > 
> > """ 
> > w1 = theano.shared(np.asarray(np.random.uniform(-1.0/25, 1.0/25, (6, 1, 
> 5, 5)), dtype=theano.config.floatX)) 
> > b1 = theano.shared(np.asarray(np.random.uniform(-1.0/25, 1.0/25, 

[theano-users] Re: Load JPEG images

2016-11-07 Thread Beatriz G.
I use the os library to get the images from the folders. It does not have anything to 
do with Theano. Have a look at this library.
When you have one image, just read it.
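
For example, a rough sketch of such a load_data helper using os and OpenCV; the folder layout, image size and labels here are only assumptions for illustration:

import os
import numpy
import cv2

def load_folder(path, label, size=(128, 128)):
    # Read every JPEG in `path`, resize it and flatten it into one row per image.
    X, y = [], []
    for name in sorted(os.listdir(path)):
        if not name.lower().endswith('.jpg'):
            continue
        image = cv2.imread(os.path.join(path, name))
        image = cv2.resize(image, size)
        X.append(numpy.ravel(image))
        y.append(label)
    return X, y

# e.g. cats as class 0 and dogs as class 1:
# train_x, train_y = [], []
# for label, folder in enumerate(['train/cat/', 'train/dog/']):
#     X, y = load_folder(folder, label)
#     train_x += X
#     train_y += y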

I hope I have helped.

El miércoles, 2 de noviembre de 2016, 18:23:11 (UTC+1), David Chik escribió:
>
> I have prepared: 
> 5000 JPEG images of cats for training in a folder train/cat/
> 5000 images of dogs for training in a folder train/dog/  
> 1000 cats for validation in a folder valid/cat/
> 1000 dogs for validation in valid/dog/
> 1000 cats for testing in a folder test/cat/
> 1000 dogs for testing in test/dog/
>  
> Now, can someone kindly give me an example code of load_data, so I can 
> input the above data?
>
> Thank you very much indeed.
>
>
>



[theano-users] TypeError: Cannot convert Type TensorType

2016-11-07 Thread Beatriz G.
Hi everyone.


I am having the common error TypeError: Cannot convert Type TensorType. I 
have not been able to solve it using the help in other posts.

I am using my own database and I am using the LENET architecture.


This is my data. I read it in a separate function, image by image:

croped_Scale = frame[arr:abj, izq:dcha]
image = cv2.resize(croped_Scale, (128, 128))
aux_vect = np.ravel(image)
aux_X.append(aux_vect)




And then I use it in LENET:

train_set_x = theano.shared(numpy.array(train_set_x))
test_set_x = theano.shared(numpy.array(test_set_x))
train_set_y = theano.shared(numpy.array(y_train))
test_set_y = theano.shared(numpy.array(y_test))
valid_set_x = theano.shared(numpy.array(valid_set_x))
valid_set_y = theano.shared(numpy.array(y_val))



whose shape is:


(2324, 49152), (664, 49152), (332, 49152) for the samples (training, test and validation)
(2324,), (664,), (332,) for the labels (training, test and validation)



Here is my error: TypeError: Cannot convert Type TensorType(uint8, matrix) 
(of Variable Subtensor{int64:int64:}.0) into Type TensorType(float64, 
matrix). You can try to manually convert Subtensor{int64:int64:}.0 into a 
TensorType(float64, matrix).
The line where it is produced is: y: test_set_y[index * batch_size: (index + 1) * batch_size]


If I use the flatten function like this:

train_set_x = theano.shared(numpy.array(train_set_x).flatten())
test_set_x = theano.shared(numpy.array(test_set_x).flatten())
train_set_y = theano.shared(numpy.array(y_train).flatten())
test_set_y = theano.shared(numpy.array(y_test).flatten())
valid_set_x = theano.shared(numpy.array(valid_set_x).flatten())
valid_set_y = theano.shared(numpy.array(y_val).flatten())


I get the following error:TypeError: Cannot convert Type TensorType(uint8, 
vector) (of Variable Subtensor{int64:int64:}.0) into Type TensorType(float64, 
matrix). You can try to manually convert Subtensor{int64:int64:}.0 into a 
TensorType(float64, matrix).

 

If I force the samples to be float64 and the labels to be int32 it works, 
but I do not want to do that because it would not work on the GPU, given 
the line in my theanorc: floatX = float32

I have been reading the forum, but I have not found the solution.

Could anyone help me? I would like to run the code on the GPU.

Regards.



Re: [theano-users] UserWarning: downsample module has been moved to the pool module

2016-10-07 Thread Beatriz G.
Thank you for your help!!

Regards.
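
For reference, a minimal sketch of the renamed call, following Pascal's answer quoted below; conv_out and poolsize are placeholders standing in for a conv layer's output and the pooling factor:

import theano.tensor as T
from theano.tensor.signal import pool

conv_out = T.tensor4('conv_out')   # placeholder for the output of a conv layer
poolsize = (2, 2)

# old: from theano.tensor.signal import downsample
#      pooled = downsample.max_pool_2d(conv_out, ds=poolsize, ignore_border=True)
# new: the same operation through the pool module
pooled = pool.pool_2d(conv_out, ds=poolsize, ignore_border=True, mode='max')
# mode can also be 'sum', 'average_inc_pad' or 'average_exc_pad' for other poolings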

El jueves, 6 de octubre de 2016, 21:14:20 (UTC+2), Pascal Lamblin escribió:
>
> On Thu, Oct 06, 2016, Beatriz G. wrote: 
> > I cannot use downsample.max_pool_2d; what is it called now? 
> > theano.tensor.signal.pool.pool_2d() or 
> > theano.tensor.signal.pool.max_pool_2d_same_size()? 
>
> The equivalent is pool_2d(..., mode='max'). 
>
> max_pool_2d_same_size is used when you want to zero out the elements 
> that are not the local max, and keep a tensor that has the same size. 
>
>
> > 
> > regards. 
> > 
> > El martes, 1 de marzo de 2016, 3:02:48 (UTC+1), Michael Klachko 
> escribió: 
> > > 
> > > To answer my own question, the new code for average pooling is about 4 
> > > times faster. 
> > > 
> > > 
> > > 
> > > On Friday, February 26, 2016 at 2:05:10 PM UTC-8, Michael Klachko 
> wrote: 
> > >> 
> > >> Thanks! Actually, it's theano.tensor.signal.pool. 
> > >> Previously, to implement averaging pooling operation, I used the 
> > >> following code: 
> > >> 
> > >> pooled_out = TSN.images2neibs(ten4=conv_out, neib_shape=poolsize, 
> > >> mode='ignore_borders').mean(axis=-1) 
> > >> new_shape = T.cast(T.join(0, conv_out.shape[:-2], 
> > >>                           T.as_tensor([conv_out.shape[2]/poolsize[0]]), 
> > >>                           T.as_tensor([conv_out.shape[3]/poolsize[1]])), 'int64') 
> > >> pooled_out = T.reshape(pooled_out, new_shape, ndim=4) 
> > >> 
> > >> Where TSN is theano.sandbox.neighbours. I wonder if there's any speed 
> > >> advantage to replace this code with: 
> > >> 
> > >> pooled_out = pool.pool_2d(conv_out, ds=poolsize, ignore_border=True, 
> > >> st=None, padding=(0,0), mode=pooltype) 
> > >> 
> > >> 
> > >> 
> > >> On Friday, February 26, 2016 at 6:06:12 AM UTC-8, nouiz wrote: 
> > >>> 
> > >>> theano.tensor.nnet.pool 
> > >>> 
> > >>> Fred 
> > >>> 
> > >>> On Thu, Feb 25, 2016 at 7:56 PM, Michael Klachko <
> michael...@gmail.com> 
> > >>> wrote: 
> > >>> 
> > >>>> I'm running the latest dev version, where do I find this new 'pool' 
> > >>>> module? 
> > >>>> 
> > >>> 
> > >>> 
> > 
>
>
> -- 
> Pascal 
>



[theano-users] Problem with blas and downsample module

2016-10-06 Thread Beatriz G.
Hi everyone, I have just installed Theano on my Windows computer and I am 
able to import Theano, but when I try to import the downsample module 
I get the following warning and error:

WARNING (theano.tensor.blas): Failed to import scipy.linalg.blas, and 
Theano flag blas.ldflags is empty. Falling back on slower implementations 
for dot(matrix, vector), dot(vector, matrix) and dot(vector, vector) (No 
module named scipy.linalg.blas)
Traceback (most recent call last):
  File "C:/Users/FRAV/Desktop/Beatriz/CASIA_VDEO_LENET/casia_Lenet.py", 
line 10, in 
from theano.tensor.signal import downsample
ImportError: cannot import name downsample

Process finished with exit code 1


Could anyone help me? 

The code works fine on my Ubuntu laptop.



[theano-users] output of conv layer using stride

2016-10-04 Thread Beatriz G.
Hi everyone:

I would like to calculate the size of the images at the output of a 
convolutional layer (or convolutional + max-pool layer) to give that value 
to the next layer.

Anyone could help me?

Regards.
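
A small sketch of the usual arithmetic, assuming 'valid' convolution and pooling with ignore_border=True as in the LeNet code discussed in the other threads; the helper names are only illustrative:

def conv_output_size(in_size, filter_size, stride=1, pad=0):
    # one spatial dimension of a convolution output
    return (in_size + 2 * pad - filter_size) // stride + 1

def pool_output_size(in_size, pool_size, stride=None):
    # one spatial dimension of a max-pool output (stride defaults to the pool size)
    if stride is None:
        stride = pool_size
    return (in_size - pool_size) // stride + 1

# example: a 28x28 input, 5x5 filters with stride 1, then 2x2 pooling
# conv_output_size(28, 5)  -> 24
# pool_output_size(24, 2)  -> 12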



[theano-users] Re: get test labels

2016-08-03 Thread Beatriz G.
It works!!!

thank you, thank you very much!!!

El miércoles, 3 de agosto de 2016, 3:38:36 (UTC+2), Jesse Livezey escribió:
>
> I just changed
>
> salidas_capa3[test_model(i)]
>
> to
>
> salidas_capa3(i)
>
>
> the function salidas_capa3 expects a batch index as an argument.
>
> On Sunday, July 31, 2016 at 3:16:45 PM UTC-4, Beatriz G. wrote:
>>
>> Is it not what I have given to salidas_capa3?
>>
>> I am really thankful for your help, really, really thankful.
>>
>>
>> El viernes, 29 de julio de 2016, 4:00:51 (UTC+2), Jesse Livezey escribió:
>>>
>>> I think you just want to do
>>>
>>> for i in range(n_test_batches):
>>> test_losses = [test_model(i)]
>>> y_pred_test = salidas_capa3(i)
>>> print y_pred_test
>>>
>>>
>>> The salidas_capa3 function expects a minibatch index as an argument.
>>>
>>> On Wednesday, July 27, 2016 at 11:27:08 PM UTC-7, Beatriz G. wrote:
>>>>
>>>> I am not able to extract the value of that function at that point. I 
>>>> have debugged and I have gotten the results of test_model in the 
>>>> attached pic.
>>>>
>>>> Thank you for your help.
>>>>
>>>>
>>>>
>>>> What is the value of test_model(i) at that point? I think it should be 
>>>>> an array of indices.
>>>>>
>>>>> On Wednesday, July 27, 2016 at 1:52:27 AM UTC-7, Beatriz G. wrote:
>>>>>>
>>>>>> Hi Jesse, thank you for your reply.
>>>>>>
>>>>>> I have tried to use it when I test:
>>>>>>
>>>>>> # The network has to be loaded here
>>>>>>
>>>>>> layer0.W.set_value(w0_test)
>>>>>> layer0.b.set_value(b0_test)
>>>>>>
>>>>>> layer1.W.set_value(w1_test)
>>>>>> layer1.b.set_value(b1_test)
>>>>>>
>>>>>> layer2.W.set_value(w2_test)
>>>>>> layer2.b.set_value(b2_test)
>>>>>>
>>>>>> # test it on the test set
>>>>>> for i in range(n_test_batches):
>>>>>> test_losses = [test_model(i)]
>>>>>> y_pred_test = salidas_capa3[test_model(i)]
>>>>>> print y_pred_test
>>>>>> test_score = numpy.mean(test_losses)
>>>>>>
>>>>>> print((' test error of best model %f %%') % (test_score * 100.))
>>>>>>
>>>>>>
>>>>>>
>>>>>> but I get the following error:
>>>>>>
>>>>>>
>>>>>> Traceback (most recent call last):
>>>>>>   File "/home/beaa/Escritorio/Theano/Separando_Lenet.py", line 414, in 
>>>>>> 
>>>>>> evaluate_lenet5()
>>>>>>   File "/home/beaa/Escritorio/Theano/Separando_Lenet.py", line 390, in 
>>>>>> evaluate_lenet5
>>>>>> y_pred_test = salidas_capa3[test_model(i)]
>>>>>>   File 
>>>>>> "/home/beaa/.local/lib/python2.7/site-packages/theano/compile/function_module.py",
>>>>>>  line 545, in __getitem__
>>>>>> return self.value[item]
>>>>>>   File 
>>>>>> "/home/beaa/.local/lib/python2.7/site-packages/theano/compile/function_module.py",
>>>>>>  line 480, in __getitem__
>>>>>> s = finder[item]
>>>>>> TypeError: unhashable type: 'numpy.ndarray'
>>>>>>
>>>>>>
>>>>>>
>>>>>> and I do not know what produces it.
>>>>>>
>>>>>>
>>>>>> Regards
>>>>>>
>>>>>>
>>>>>> El miércoles, 27 de julio de 2016, 2:29:24 (UTC+2), Jesse Livezey 
>>>>>> escribió:
>>>>>>>
>>>>>>> You should be able to use this function to output y_pred
>>>>>>>
>>>>>>> salidas_capa3 = theano.function(
>>>>>>> [index],
>>>>>>> layer3.y_pred,
>>>>>>> givens={
>>>>>>> x: test_set_x[index * batch_size: (index + 1) * batch_size],
>>>>>>> }
>>>>>>> )
>>>>>>>
>>>>>>>
>>>>>>> On Monday, July 25, 2016 at 3:09:09 AM UTC-7, Beatriz G. wrote:
>>>>>>>>
>>>>>>>> Hi, anyone knows how to get the test labels that the classifier has 
>>>>>>>> given to the data? 
>>>>>>>>
>>>>>>>> I would like to extrat the data that has not been well classified.
>>>>>>>>
>>>>>>>> Regards.
>>>>>>>>
>>>>>>>



[theano-users] Re: How to calculate precision, accuracy, and recall in Theano?

2016-08-02 Thread Beatriz G.
HI guys, how do you get the predictions? I woud like to calculate those 
parameters but I can not If I do not get the predictions.

regards.
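
A minimal sketch of one way to do it with plain numpy, once the predicted labels have been collected batch by batch from a theano.function that returns layer3.y_pred (as in the "get test labels" thread); the array names are only illustrative:

import numpy

def precision_recall(y_true, y_pred):
    # y_true / y_pred: 1-D integer label arrays of the same length
    results = {}
    for c in numpy.unique(y_true):
        tp = numpy.sum((y_pred == c) & (y_true == c))
        fp = numpy.sum((y_pred == c) & (y_true != c))
        fn = numpy.sum((y_pred != c) & (y_true == c))
        precision = tp / float(tp + fp) if tp + fp else 0.0
        recall = tp / float(tp + fn) if tp + fn else 0.0
        results[int(c)] = (precision, recall)
    return results

# example:
# precision_recall(numpy.array([0, 0, 1, 1]), numpy.array([0, 1, 1, 1]))
# -> {0: (1.0, 0.5), 1: (0.666..., 1.0)}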



El sábado, 30 de julio de 2016, 7:37:39 (UTC+2), Ivy Junior escribió:
>
> In theano, "for" can be subsititued by "scan", and it worked!
>
> true_pos, _ = theano.scan(fn = lambda nx, y, y_pred, mask:
>                               (tensor.eq(y, nx)*tensor.eq(y_pred, nx)*mask).sum(),
>                           outputs_info = None,
>                           non_sequences = [y, y_pred, mask],
>                           sequences = tensor.arange(n_labels))
> false_pos, _ = theano.scan(fn = lambda nx, y, y_pred, mask:
>                                (tensor.neq(y, nx)*tensor.eq(y_pred, nx)*mask).sum(),
>                            outputs_info = None,
>                            non_sequences = [y, y_pred, mask],
>                            sequences = tensor.arange(n_labels))
> false_neg, _ = theano.scan(fn = lambda nx, y, y_pred, mask:
>                                (tensor.eq(y, nx)*tensor.neq(y_pred, nx)*mask).sum(),
>                            outputs_info = None,
>                            non_sequences = [y, y_pred, mask],
>                            sequences = tensor.arange(n_labels))
>
> My task is sequence labeling, like multi_class classification, so, 
> p = ((true_pos+0.01) / (true_pos + false_pos+0.02)).mean()
> r = ((true_pos+0.01) / (true_pos + false_neg+0.02)).mean()
> f1 = 2*r*p/(r+p)
>
> On Tuesday, May 12, 2015 at 1:35:09 PM UTC+8, Mehdi wrote:
>>
>> Hi,
>>
>> I adapted the code in this 
>> page and 
>> I need to calculate the precision, accuracy, and recall of my model. I 
>> assume np.mean(np.argmax(teY, axis=1) == predict(teX)) calculates the 
>> accuracy of the model. Not sure how to calculate the recall and precision. 
>> Any help is appreciated. 
>>
>



Re: [theano-users] TypeError: unhashable type: 'numpy.ndarray'

2016-07-27 Thread Beatriz G.
Hi guys, 

I am having a similar problem. I am trying to compile the following Theano 
function:

layer3 = LogisticRegression(input=layer2.output, n_in=100, n_out=4)


salidas_capa3 = theano.function(
[index],
layer3.y_pred,
givens={
x: test_set_x[index * batch_size: (index + 1) * batch_size]
}
)



Which I call in the test part:


for i in range(n_test_batches):
test_losses = [test_model(i)]
y_pred_test = salidas_capa3[test_model(i)]
print y_pred_test
test_score = numpy.mean(test_losses)



And I am getting the next error:


Traceback (most recent call last):
  File "/home/beaa/Escritorio/Theano/Separando_Lenet.py", line 416, in 
evaluate_lenet5()
  File "/home/beaa/Escritorio/Theano/Separando_Lenet.py", line 392, in 
evaluate_lenet5
y_pred_test = salidas_capa3[test_model(i)]
  File 
"/home/beaa/.local/lib/python2.7/site-packages/theano/compile/function_module.py",
 line 545, in __getitem__
return self.value[item]
  File 
"/home/beaa/.local/lib/python2.7/site-packages/theano/compile/function_module.py",
 line 480, in __getitem__
s = finder[item]
TypeError: unhashable type: 'numpy.ndarray'




Could anyone guide me?


Here is my code:


class LeNetConvPoolLayer(object):
"""Pool Layer of a convolutional network """

def __init__(self, rng, input, filter_shape, image_shape, poolsize=(2, 2)):
"""
Allocate a LeNetConvPoolLayer with shared variable internal parameters.
:type rng: numpy.random.RandomState
:param rng: a random number generator used to initialize weights
:type input: theano.tensor.dtensor4
:param input: symbolic image tensor, of shape image_shape
:type filter_shape: tuple or list of length 4
:param filter_shape: (number of filters, num input feature maps,
  filter height, filter width)
:type image_shape: tuple or list of length 4
:param image_shape: (batch size, num input feature maps,
 image height, image width)
:type poolsize: tuple or list of length 2
:param poolsize: the downsampling (pooling) factor (#rows, #cols)
"""

assert image_shape[1] == filter_shape[1]  # the number of input feature maps must match in both variables
self.input = input

# there are "num input feature maps * filter height * filter width"
# inputs to each hidden unit
fan_in = numpy.prod(filter_shape[1:])
# fan_in: the number of neurons in the previous layer
# each unit in the lower layer receives a gradient from:
# "num output feature maps * filter height * filter width" /
#   pooling size

# fan_out: number of filters * filter height * filter width / pool size (rows * cols)
fan_out = (filter_shape[0] * numpy.prod(filter_shape[2:]) /
           numpy.prod(poolsize))
# initialize weights with random weights

# initialize weights with random weights
W_bound = numpy.sqrt(6. / (fan_in + fan_out))
self.W = theano.shared(
numpy.asarray(
rng.uniform(low=-W_bound, high=W_bound, size=filter_shape),
# W_bound is computed this way for the tanh activation function.
# The weights depend on the filter size (number of neurons).

dtype=theano.config.floatX  # so that it is valid on the GPU
),
borrow=True
)

# the bias is a 1D tensor -- one bias per output feature map
b_values = numpy.zeros((filter_shape[0],), dtype=theano.config.floatX)
self.b = theano.shared(value=b_values, borrow=True)

# convolve input feature maps with filters
conv_out = conv.conv2d(
input=input,
filters=self.W,
filter_shape=filter_shape,
image_shape=image_shape
)

# downsample each feature map individually, using maxpooling
pooled_out = downsample.max_pool_2d(
input=conv_out,
ds=poolsize,
ignore_border=True
)

print pooled_out

# add the bias term. Since the bias is a vector (1D array), we first
# reshape it to a tensor of shape (1, n_filters, 1, 1). Each bias will
# thus be broadcasted across mini-batches and feature map
# width & height
self.output = theano.tensor.nnet.relu((pooled_out + 
self.b.dimshuffle('x', 0, 'x', 'x')) , alpha=0)


# store parameters of this layer
self.params = [self.W, self.b]

# keep track of model input
self.input = input

self.conv_out=conv_out
self.pooled_out=pooled_out

self.salidas_capa = [self.conv_out, self.pooled_out, self.output]



def evaluate_lenet5(learning_rate=0.001, n_epochs=2, nkerns=[48, 96], 
batch_size=20):
""" Demonstrates lenet on MNIST dataset

:type learning_rate: 

[theano-users] Re: get test labels

2016-07-26 Thread Beatriz G.
Hi everyone, I am trying to solve my problem and I would like to get y_pred 
from the logistic regression (logistic_sgd.py) when it is classifying test data. 
Here is my code:




class LeNetConvPoolLayer(object):
"""Pool Layer of a convolutional network """

def __init__(self, rng, input, filter_shape, image_shape, poolsize=(2, 2)):
"""
Allocate a LeNetConvPoolLayer with shared variable internal parameters.
:type rng: numpy.random.RandomState
:param rng: a random number generator used to initialize weights
:type input: theano.tensor.dtensor4
:param input: symbolic image tensor, of shape image_shape
:type filter_shape: tuple or list of length 4
:param filter_shape: (number of filters, num input feature maps,
  filter height, filter width)
:type image_shape: tuple or list of length 4
:param image_shape: (batch size, num input feature maps,
 image height, image width)
:type poolsize: tuple or list of length 2
:param poolsize: the downsampling (pooling) factor (#rows, #cols)
"""

assert image_shape[1] == filter_shape[1]  # the number of input feature maps must match in both variables
self.input = input

# there are "num input feature maps * filter height * filter width"
# inputs to each hidden unit
fan_in = numpy.prod(filter_shape[1:])
# fan_in: the number of neurons in the previous layer
# each unit in the lower layer receives a gradient from:
# "num output feature maps * filter height * filter width" /
#   pooling size

# fan_out: number of filters * filter height * filter width / pool size (rows * cols)
fan_out = (filter_shape[0] * numpy.prod(filter_shape[2:]) /
           numpy.prod(poolsize))
# initialize weights with random weights

# initialize weights with random weights
W_bound = numpy.sqrt(6. / (fan_in + fan_out))
self.W = theano.shared(
numpy.asarray(
rng.uniform(low=-W_bound, high=W_bound, size=filter_shape),
# W_bound is computed this way for the tanh activation function.
# The weights depend on the filter size (number of neurons).

dtype=theano.config.floatX  # so that it is valid on the GPU
),
borrow=True
)

# the bias is a 1D tensor -- one bias per output feature map
b_values = numpy.zeros((filter_shape[0],), dtype=theano.config.floatX)
self.b = theano.shared(value=b_values, borrow=True)

# convolve input feature maps with filters
conv_out = conv.conv2d(
input=input,
filters=self.W,
filter_shape=filter_shape,
image_shape=image_shape
)

# downsample each feature map individually, using maxpooling
pooled_out = downsample.max_pool_2d(
input=conv_out,
ds=poolsize,
ignore_border=True
)

print pooled_out

# add the bias term. Since the bias is a vector (1D array), we first
# reshape it to a tensor of shape (1, n_filters, 1, 1). Each bias will
# thus be broadcasted across mini-batches and feature map
# width & height
self.output = theano.tensor.nnet.relu((pooled_out + 
self.b.dimshuffle('x', 0, 'x', 'x')) , alpha=0)


# store parameters of this layer
self.params = [self.W, self.b]

# keep track of model input
self.input = input

self.conv_out=conv_out
self.pooled_out=pooled_out

self.salidas_capa = [self.conv_out, self.pooled_out, self.output]



def evaluate_lenet5(learning_rate=0.001, n_epochs=10, nkerns=[48, 96], 
batch_size=20):
""" Demonstrates lenet on MNIST dataset

:type learning_rate: float
:param learning_rate: learning rate used (factor for the stochastic
  gradient)

:type n_epochs: int
:param n_epochs: maximal number of epochs to run the optimizer

:type dataset: string
:param dataset: path to the dataset used for training /testing (MNIST here)

:type nkerns: list of ints
:param nkerns: number of kernels on each layer
"""

rng = numpy.random.RandomState(2509)

train_set_x, test_set_x, train_set_y, test_set_y, valid_set_x, valid_set_y 
=  Load_casia_Data2()

train_set_x = theano.shared(numpy.array(train_set_x,  dtype='float64',))
test_set_x = theano.shared(numpy.array(test_set_x, dtype='float64'))
train_set_y = theano.shared(numpy.array(train_set_y,  dtype='int32'))
test_set_y = theano.shared(numpy.array(test_set_y, dtype='int32'))
valid_set_x = theano.shared(numpy.array(valid_set_x, dtype='float64'))
valid_set_y = theano.shared(numpy.array(valid_set_y, dtype='int32'))

print("n_batches:")

# compute