[theano-users] Re: get test labels

2016-08-03 Thread Beatriz G.
It works!!!

thank you, thank you very much!!!

On Wednesday, August 3, 2016 at 3:38:36 AM UTC+2, Jesse Livezey wrote:
>
> I just changed
>
> salidas_capa3[test_model(i)]
>
> to
>
> salidas_capa3(i)
>
>
> The function salidas_capa3 expects a batch index as an argument.
>
> On Sunday, July 31, 2016 at 3:16:45 PM UTC-4, Beatriz G. wrote:
>>
>> Is that not what I have given to salidas_capa3?
>>
>> I am really thankful for your help, really, really thankful.
>>
>>
>> On Friday, July 29, 2016 at 4:00:51 AM UTC+2, Jesse Livezey wrote:
>>>
>>> I think you just want to do
>>>
>>> for i in range(n_test_batches):
>>>     test_losses = [test_model(i)]
>>>     y_pred_test = salidas_capa3(i)
>>>     print y_pred_test
>>>
>>>
>>> The salidas_capa3 function expects a minibatch index as an argument.
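A plain-Python sketch of calling such a per-batch function and collecting its outputs across all batches (the salidas_capa3 below is a hypothetical stand-in, not the real compiled Theano function):

```python
# Hypothetical stand-in for the compiled salidas_capa3: returns the
# predicted labels for batch i (here just a fixed-size dummy batch).
def salidas_capa3(i):
    return [i] * 3  # pretend each batch yields 3 predictions

n_test_batches = 4
all_preds = []
for i in range(n_test_batches):
    # call with parentheses and a batch index, then accumulate
    all_preds.extend(salidas_capa3(i))
print(len(all_preds))  # 12
```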
>>>
>>> On Wednesday, July 27, 2016 at 11:27:08 PM UTC-7, Beatriz G. wrote:

 I am not able to extract the value of that function at that point. I 
 have debugged, and I have gotten the results of test_model in the attached 
 pic.

 Thank you for your help.



 What is the value of test_model(i) at that point? I think it should be 
> an array of indices.
>
> On Wednesday, July 27, 2016 at 1:52:27 AM UTC-7, Beatriz G. wrote:
>>
>> Hi Jesse, thank you for your reply.
>>
>> I have tried to use it when I test:
>>
>> # The network has to be loaded here
>>
>> layer0.W.set_value(w0_test)
>> layer0.b.set_value(b0_test)
>>
>> layer1.W.set_value(w1_test)
>> layer1.b.set_value(b1_test)
>>
>> layer2.W.set_value(w2_test)
>> layer2.b.set_value(b2_test)
>>
>> # test it on the test set
>> for i in range(n_test_batches):
>>     test_losses = [test_model(i)]
>>     y_pred_test = salidas_capa3[test_model(i)]
>>     print y_pred_test
>> test_score = numpy.mean(test_losses)
>>
>> print((' test error of best model %f %%') % (test_score * 100.))
>>
>>
>>
>> but I get the following error:
>>
>>
>> Traceback (most recent call last):
>>   File "/home/beaa/Escritorio/Theano/Separando_Lenet.py", line 414, in 
>> 
>> evaluate_lenet5()
>>   File "/home/beaa/Escritorio/Theano/Separando_Lenet.py", line 390, in 
>> evaluate_lenet5
>> y_pred_test = salidas_capa3[test_model(i)]
>>   File 
>> "/home/beaa/.local/lib/python2.7/site-packages/theano/compile/function_module.py",
>>  line 545, in __getitem__
>> return self.value[item]
>>   File 
>> "/home/beaa/.local/lib/python2.7/site-packages/theano/compile/function_module.py",
>>  line 480, in __getitem__
>> s = finder[item]
>> TypeError: unhashable type: 'numpy.ndarray'
>>
>>
>>
>> and I do not know what produces it.
>>
>>
>> Regards
>>
>>
>> On Wednesday, July 27, 2016 at 2:29:24 AM UTC+2, Jesse Livezey 
>> wrote:
>>>
>>> You should be able to use this function to output y_pred
>>>
>>> salidas_capa3 = theano.function(
>>>     [index],
>>>     layer3.y_pred,
>>>     givens={
>>>         x: test_set_x[index * batch_size: (index + 1) * batch_size],
>>>     }
>>> )
>>>
>>>
>>> On Monday, July 25, 2016 at 3:09:09 AM UTC-7, Beatriz G. wrote:

 Hi, does anyone know how to get the test labels that the classifier has 
 given to the data? 

 I would like to extract the data that were not classified correctly.

 Regards.

>>>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[theano-users] Re: get test labels

2016-08-02 Thread Jesse Livezey
I just changed

salidas_capa3[test_model(i)]

to

salidas_capa3(i)


The function salidas_capa3 expects a batch index as an argument.
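
As a rough analogy in plain Python (the names below are stand-ins, not the real compiled function): square brackets on a compiled Theano function perform a dict-style lookup of the function's stored values, not a call, which is why passing an ndarray raised "unhashable type":

```python
# Hypothetical stand-in for the function's internal value store;
# indexing it with an array-like key fails just as in the traceback.
store = {"W": [0.1, 0.2]}
try:
    store[[1, 2, 3]]   # list/array key -> TypeError: unhashable type
    failed = False
except TypeError:
    failed = True

# Calling with parentheses, by contrast, actually runs the function.
salidas_capa3 = lambda i: "predictions for batch %d" % i  # hypothetical stand-in
result = salidas_capa3(0)
```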




[theano-users] Re: get test labels

2016-07-26 Thread Jesse Livezey
You should be able to use this function to output y_pred

salidas_capa3 = theano.function(
    [index],
    layer3.y_pred,
    givens={
        x: test_set_x[index * batch_size: (index + 1) * batch_size],
    }
)
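
A NumPy sketch of the slice that the givens clause selects for each batch index (the shapes here are made up for illustration):

```python
import numpy

batch_size = 20
test_set_x = numpy.arange(100 * 4).reshape(100, 4)  # hypothetical test set

def batch_slice(index):
    # same indexing as the givens clause:
    # test_set_x[index * batch_size: (index + 1) * batch_size]
    return test_set_x[index * batch_size:(index + 1) * batch_size]

print(batch_slice(0).shape)  # (20, 4)
```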


On Monday, July 25, 2016 at 3:09:09 AM UTC-7, Beatriz G. wrote:
>
> Hi, does anyone know how to get the test labels that the classifier has given 
> to the data? 
>
> I would like to extract the data that were not classified correctly.
>
> Regards.
>
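
Once per-batch predictions are available (e.g. from a function like salidas_capa3), the misclassified examples asked about here can be picked out by comparing predictions against the true labels; a small NumPy sketch with made-up labels:

```python
import numpy

# Hypothetical predicted and true labels for one test batch
y_pred = numpy.array([0, 1, 2, 1, 0])
y_true = numpy.array([0, 1, 1, 1, 0])

# Indices of the examples that were not classified correctly
wrong_indices = numpy.flatnonzero(y_pred != y_true)
print(wrong_indices)  # [2]
```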



[theano-users] Re: get test labels

2016-07-26 Thread Beatriz G.
Hi everyone, I am trying to solve my problem, and I would like to get y_pred 
from the logistic regression (logistic_sgd.py) when it is classifying the test 
data. Here is my code:




class LeNetConvPoolLayer(object):
    """Pool Layer of a convolutional network """

    def __init__(self, rng, input, filter_shape, image_shape, poolsize=(2, 2)):
        """
        Allocate a LeNetConvPoolLayer with shared variable internal parameters.

        :type rng: numpy.random.RandomState
        :param rng: a random number generator used to initialize weights

        :type input: theano.tensor.dtensor4
        :param input: symbolic image tensor, of shape image_shape

        :type filter_shape: tuple or list of length 4
        :param filter_shape: (number of filters, num input feature maps,
                              filter height, filter width)

        :type image_shape: tuple or list of length 4
        :param image_shape: (batch size, num input feature maps,
                             image height, image width)

        :type poolsize: tuple or list of length 2
        :param poolsize: the downsampling (pooling) factor (#rows, #cols)
        """

        # the number of feature maps must be the same in both variables
        assert image_shape[1] == filter_shape[1]
        self.input = input

        # there are "num input feature maps * filter height * filter width"
        # inputs to each hidden unit
        fan_in = numpy.prod(filter_shape[1:])  # number of neurons in the previous layer

        # each unit in the lower layer receives a gradient from:
        # "num output feature maps * filter height * filter width" /
        #   pooling size
        # i.e. the number of output neurons: num filters * filter height *
        # filter width / poolsize (rows * cols)
        fan_out = (filter_shape[0] * numpy.prod(filter_shape[2:]) /
                   numpy.prod(poolsize))

        # initialize weights with random weights
        W_bound = numpy.sqrt(6. / (fan_in + fan_out))
        self.W = theano.shared(
            numpy.asarray(
                # W_bound is computed this way for the tanh activation
                # function; the weights depend on the filter size (neurons)
                rng.uniform(low=-W_bound, high=W_bound, size=filter_shape),
                dtype=theano.config.floatX  # so it is valid on the GPU
            ),
            borrow=True
        )

        # the bias is a 1D tensor -- one bias per output feature map
        b_values = numpy.zeros((filter_shape[0],), dtype=theano.config.floatX)
        self.b = theano.shared(value=b_values, borrow=True)

        # convolve input feature maps with filters
        conv_out = conv.conv2d(
            input=input,
            filters=self.W,
            filter_shape=filter_shape,
            image_shape=image_shape
        )

        # downsample each feature map individually, using maxpooling
        pooled_out = downsample.max_pool_2d(
            input=conv_out,
            ds=poolsize,
            ignore_border=True
        )

        print pooled_out

        # add the bias term. Since the bias is a vector (1D array), we first
        # reshape it to a tensor of shape (1, n_filters, 1, 1). Each bias will
        # thus be broadcasted across mini-batches and feature map
        # width & height
        self.output = theano.tensor.nnet.relu(
            pooled_out + self.b.dimshuffle('x', 0, 'x', 'x'), alpha=0)

        # store parameters of this layer
        self.params = [self.W, self.b]

        # keep track of model input
        self.input = input

        self.conv_out = conv_out
        self.pooled_out = pooled_out

        self.salidas_capa = [self.conv_out, self.pooled_out, self.output]

def evaluate_lenet5(learning_rate=0.001, n_epochs=10, nkerns=[48, 96],
                    batch_size=20):
    """ Demonstrates lenet on MNIST dataset

    :type learning_rate: float
    :param learning_rate: learning rate used (factor for the stochastic
                          gradient)

    :type n_epochs: int
    :param n_epochs: maximal number of epochs to run the optimizer

    :type dataset: string
    :param dataset: path to the dataset used for training / testing (MNIST here)

    :type nkerns: list of ints
    :param nkerns: number of kernels on each layer
    """

    rng = numpy.random.RandomState(2509)

    train_set_x, test_set_x, train_set_y, test_set_y, valid_set_x, valid_set_y \
        = Load_casia_Data2()

    train_set_x = theano.shared(numpy.array(train_set_x, dtype='float64'))
    test_set_x = theano.shared(numpy.array(test_set_x, dtype='float64'))
    train_set_y = theano.shared(numpy.array(train_set_y, dtype='int32'))
    test_set_y = theano.shared(numpy.array(test_set_y, dtype='int32'))
    valid_set_x = theano.shared(numpy.array(valid_set_x, dtype='float64'))
    valid_set_y = theano.shared(numpy.array(valid_set_y, dtype='int32'))

    print("n_batches:")

    # compute