I had a similar issue and considered two ways to do it. One was to use 
scan, which allows a dynamic batch size and compiles faster, but is 
likely slower at run time. The other was to use T.concatenate with a 
for-loop, which creates a number of graph nodes proportional to your 
fixed batch size but, from what I've heard, tends to run faster. 
I ended up going with the latter:

import theano.tensor as T
from theano.sandbox.cuda import dnn as cudnn

# Convolve one batch element's image with its own filterbank.
mult_C = lambda ctr, k: cudnn.dnn_conv3d(img=ctr, kerns=k)

def mult_Cs(ctr, kerns):
    # Apply the i-th filterbank to the i-th batch element, then
    # re-stack the per-sample results along the batch axis.
    return T.concatenate(
        [mult_C(ctr[[i], :, :, :, :], kerns[[i], :, :, :, :])
         for i in range(batch_size)],
        axis=0)
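For reference, here is the same idea in plain NumPy (2D, single-channel, and with hypothetical names, just to illustrate the "different kernel per batch element" pattern rather than the Theano graph itself):

```python
import numpy as np

def conv2d_single(img, kern):
    # Valid-mode cross-correlation of one (H, W) image with one (kh, kw) kernel.
    kh, kw = kern.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(img[y:y + kh, x:x + kw] * kern)
    return out

def per_sample_conv(imgs, kerns):
    # imgs: (B, H, W); kerns: (B, kh, kw) -- one distinct kernel per sample.
    # Loop over the batch, convolve each sample with its own kernel,
    # and re-stack along the batch axis (the analogue of T.concatenate above).
    return np.stack([conv2d_single(imgs[i], kerns[i])
                     for i in range(imgs.shape[0])], axis=0)
```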


Though I wouldn't be surprised if there's a better way!

Best,
Michael

On Tuesday, December 13, 2016 at 1:32:35 PM UTC-5, Andrew Brock wrote:
>
> Hi guys,
>
> I'm doing some work where I'd like to have a data-dependent convolutional 
> layer, where a different filterbank is used on each element of a given 
> batch. This is easy when doing things with a single batch, but for 
> minibatches I'd end up with a 5D  Batch x Num_filters x Channels x H x W 
> tensor, and I can't think of an easy way to apply this on a 4D Batch x 
> Channel x H x W tensor. Is there some trick with backward passes, 
> dimshuffles and 3D reshapes, or something else that anyone knows of that 
> would give me an efficient way to do this?
>
> Thanks,
>
> Andy
>

-- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.