Hi,

I'm implementing an attention LSTM (with Lasagne).

Each output Y_i is generated by attending over the input sequence N, 
concatenated with the previous state C_{i-1}.

This means the number of steps for my scan is predetermined (let's say 
10), so there is no sequence to pass to scan.
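
For reference, here is a minimal sketch of roughly what I mean (the names, 
dimensions and the toy attention step are just for illustration, not my 
actual Lasagne code):

import numpy as np
import theano
import theano.tensor as T

batch, seq_len, feat, state_dim = 16, 8700, 32, 64   # illustrative sizes

X = T.tensor3('X')    # (batch, 8700, 32), the sequence being attended over
C0 = T.matrix('C0')   # (batch, state_dim), initial state

# hypothetical parameters, only for this sketch
floatX = theano.config.floatX
W_att = theano.shared(np.random.randn(state_dim, feat).astype(floatX), 'W_att')
W_c = theano.shared(np.random.randn(feat + state_dim, state_dim).astype(floatX), 'W_c')

def step(c_prev, X_ns):
    # score every position of X against the previous state
    proj = T.dot(c_prev, W_att)                                   # (batch, feat)
    scores = T.sum(X_ns * proj.dimshuffle(0, 'x', 1), axis=2)     # (batch, seq_len)
    alpha = T.nnet.softmax(scores)                                # (batch, seq_len)
    context = T.sum(X_ns * alpha.dimshuffle(0, 1, 'x'), axis=1)   # (batch, feat)
    # new state from the attended context concatenated with the previous state
    return T.tanh(T.dot(T.concatenate([context, c_prev], axis=1), W_c))

# fixed number of steps, no `sequences`; the whole input X goes in as a non_sequence
Y, updates = theano.scan(step,
                         outputs_info=[C0],
                         non_sequences=[X],
                         n_steps=10)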

The problem is that passing the input (shape = (batch, 8700, 32)) as a 
non_sequence seems to overload the scan operation.

It slows down the running time by approximately a factor of 100.

Is passing a large non_sequence tensor to scan really supposed to slow it 
down so much?

Trying to use the global "input" variable inside the scan fn instead of 
passing it as a non_sequence just made the compilation time so long that I 
gave up.
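
(That variant looked roughly like this, reusing the names from the sketch 
above: the step function closes over X directly instead of receiving it 
through non_sequences.)

def step_closure(c_prev):
    # same toy attention as above, but referencing X from the outer scope
    proj = T.dot(c_prev, W_att)
    scores = T.sum(X * proj.dimshuffle(0, 'x', 1), axis=2)
    alpha = T.nnet.softmax(scores)
    context = T.sum(X * alpha.dimshuffle(0, 1, 'x'), axis=1)
    return T.tanh(T.dot(T.concatenate([context, c_prev], axis=1), W_c))

Y2, updates2 = theano.scan(step_closure, outputs_info=[C0], n_steps=10)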

Any ideas how to overcome this issue?

Best,

Shir
