[theano-users] theano.scan: ValueError: length not known & Question regarding sequences and previous values

2017-07-21 Thread beja . 65536
Hi everybody,
I am trying to approximate a function that consists of multiple overlapping 
Gaussian kernels. The calculation should be done in the scan function, which 
iterates over the Gaussian kernels. At this point I am stuck on two 
problems.

1. I have tried to calculate the sum of all kernels for each input by using 
the previous result. However, the return value is not the sum of all 
kernels for a given input but something else. I have not been able to figure 
it out yet. See the attached file scan_sum_kernel.py.

2. I would like to define the input vector and the kernel parameters 
(height, mean, variance) as NumPy arrays. Therefore, I added the 'givens' 
parameter to the function. However, this leads to the ValueError: "length 
not known:  [id A]". See the attached file 
scan_shared_variable.py.

Any hints and advice on how to proceed and what to try next are much appreciated!
Below is the code that is working so far:
x = np.arange(start=1, stop=100, step=1)
xs = T.dscalar('xs')
height = T.dvector('height')
mean = T.dvector('mean')
variance = T.dvector('variance')
bias = T.dvector('bias')

def gaussian(height, mean, variance, bias, x):
    return (height *
            T.exp(-(T.sqr(x - mean) /
                    (2 * variance + bias))))

gaus_dist, updates = theano.scan(
    fn=gaussian,
    sequences=[height, mean, variance, bias],
    non_sequences=[xs]
)

get_gaus = theano.function(inputs=[xs, height, mean, variance, bias],
                           outputs=gaus_dist)

gaus3 = np.array([])
for xs in x:
    # evaluate every kernel at this x value, then sum their contributions
    gaus1 = get_gaus(xs, [2, 10, 11], [10, 35, 64], [2, 12, 22], [2, 3, 4])
    gaus2 = 0
    for gaus_tmp in gaus1:
        gaus2 = gaus2 + gaus_tmp
    gaus3 = np.append(gaus3, gaus2)

%matplotlib inline
plt.plot(gaus3)
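
Since get_gaus already returns one value per kernel, the inner Python loop
above can be collapsed into a NumPy sum. A minimal, equivalent sketch (same
names and parameter values as in the code above):

# sum the per-kernel values returned by get_gaus directly
gaus3 = np.array([
    get_gaus(v, [2, 10, 11], [10, 35, 64], [2, 12, 22], [2, 3, 4]).sum()
    for v in x
])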


# Attachment: scan_sum_kernel.py

import theano.tensor as T
import theano
import numpy as np
import matplotlib.pyplot as plt

x = np.arange(start=1, stop=100, step=1)
xs = T.dscalar('xs')
height = T.dvector('height')
mean = T.dvector('mean')
variance = T.dvector('variance')
bias = T.dvector('bias')

def gaussian(height, mean, variance, bias, last_ret, x):
    tmp = (height * T.exp(-(T.sqr(x - mean) / (2 * variance + bias))))
    return tmp + last_ret

gaus_dist, updates = theano.scan(
    fn=gaussian,
    sequences=[height, mean, variance, bias],
    outputs_info=[np.float64(0)],
    non_sequences=[xs]
)

get_gaus = theano.function(inputs=[xs, height, mean, variance, bias],
                           outputs=gaus_dist)

gaus3 = np.array([])
for xs in x:
    gaus1 = get_gaus(xs, [2, 10, 11], [10, 35, 64], [2, 12, 22], [0, 0, 0])
    gaus2 = 0
    for gaus_tmp in gaus1:
        gaus2 = gaus2 + gaus_tmp
    gaus3 = np.append(gaus3, gaus2)

get_ipython().magic('matplotlib inline')
plt.plot(gaus3)
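
A note on the attachment above (an observation, not a confirmed diagnosis):
with outputs_info, scan returns every intermediate accumulation, so gaus1
already holds the running sums [k1, k1 + k2, k1 + k2 + k3]. Summing those
values again in the outer loop counts the earlier kernels more than once,
which would explain getting "something else" than the plain sum of all
kernels. A minimal sketch that keeps only the final accumulated value:

# gaus1 holds the running sums, so its last entry is already the total over
# all kernels; take it instead of re-summing the partial sums
gaus3 = np.array([
    get_gaus(v, [2, 10, 11], [10, 35, 64], [2, 12, 22], [0, 0, 0])[-1]
    for v in x
])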


# Attachment: scan_shared_variable.py

import theano.tensor as T
import theano
import numpy as np
import matplotlib.pyplot as plt

x = np.arange(start=1, stop=100, step=1)
xs = theano.shared(x)
height_np = np.array([2, 10, 11])
height = theano.shared(height_np)
mean_np = np.array([10, 35, 64])
mean = theano.shared(mean_np)
variance_np = np.array([2, 12, 22])
variance = theano.shared(variance_np)
bias_np = np.array([2, 2, 4])
bias = theano.shared(bias_np)

def gaussian(height, mean, variance, bias, x):
    return (height * T.exp(-(T.sqr(x - mean) / (2 * variance + bias))))

gaus_dist, updates = theano.scan(
    fn=gaussian,
    sequences=[height, mean, variance, bias],
    non_sequences=[xs],
)

# Compiling with 'givens' as below is what the post reports leads to the
# "ValueError: length not known" for this script.
get_gaus = theano.function(inputs=[xs],
                           outputs=gaus_dist,
                           givens=[height, mean, variance, bias])

gaus3 = np.array([])
for xs in x:
    gaus1 = get_gaus(xs, height, mean, variance, bias)
    gaus2 = 0
    for gaus_tmp in gaus1:
        gaus2 = gaus2 + gaus_tmp
    gaus3 = np.append(gaus3, gaus2)

get_ipython().magic('matplotlib inline')
plt.plot(gaus3)
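
An untested sketch of one possible variant of the attachment above (not a
confirmed fix for the reported ValueError): shared variables already live in
the graph, so scan can use them without them being listed as function
inputs, and 'givens', when it is used, expects (variable, replacement)
pairs rather than a flat list of variables. The scalar input is
reintroduced here as a symbolic variable (xs_scalar is a new name, not from
the original script):

xs_scalar = T.dscalar('xs_scalar')

gaus_dist, updates = theano.scan(
    fn=gaussian,
    sequences=[height, mean, variance, bias],  # shared variables as sequences
    non_sequences=[xs_scalar],
)

# only the scalar is an explicit input; the shared parameters are implicit
get_gaus = theano.function(inputs=[xs_scalar], outputs=gaus_dist)

gaus3 = np.array([get_gaus(float(v)).sum() for v in x])
plt.plot(gaus3)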



[theano-users] Scan checkpointing - what exactly theano stores?

2017-07-21 Thread Alexander Botev
So scan checkpointing seems very interesting from the perspective that it 
can be used for things like learning-to-learn.
However, my question is: can we tell Theano which part of each N-th 
iteration to store and which not? For instance, in the learning-to-learn 
framework where we unroll SGD,
the optimal choice would be to store only the "updated" parameters that get 
passed to the next time step, rather than the whole computation. Is it 
possible to achieve something like that?
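
For reference, a minimal usage sketch of theano.scan_checkpoints as I
understand its API (save_every_N controls how often the looped-over state is
kept; the steps in between are recomputed during the gradient computation).
The step function, shapes, and n_steps below are made up for illustration:

import numpy as np
import theano
import theano.tensor as T

W = theano.shared(np.eye(3), name='W')
x0 = T.dvector('x0')

def step(x_prev):
    # only the state that is returned (and passed to the next step)
    # is what scan_checkpoints keeps every save_every_N steps
    return T.tanh(T.dot(W, x_prev))

states, updates = theano.scan_checkpoints(
    fn=step,
    outputs_info=[x0],
    n_steps=100,
    save_every_N=10,
)

loss = states[-1].sum()
grad_W = T.grad(loss, W)
f = theano.function([x0], [loss, grad_W], updates=updates)

Whether the saved state can be restricted further than what the step
function returns is exactly the open question above.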
