I have a network with multiple BiLSTMs and attention layers. I sorted the
training data into mini-batches of varying input lengths, each containing
the same number of training examples. When I train the network with the
mini-batches ordered from the longest sequences to the shortest, everything
works fine, but when I shuffle the mini-batches for training I get a
"pygpu.gpuarray.GpuArrayException: out of memory" error at an arbitrary
point during training.
Has anyone else experienced this? If so, I would very much like to know the
remedy. We have GeForce GTX 1080 Ti GPUs with
11172MiB of memory.
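
For clarity, here is a minimal sketch of the batching scheme I am describing
(the names make_batches and pad_batch are only illustrative, not my exact
code):

import random
import numpy as np

def make_batches(sequences, batch_size):
    """Group sequences of similar length into fixed-size mini-batches."""
    # Sort all training examples by length so each mini-batch
    # contains sequences of roughly the same length.
    ordered = sorted(sequences, key=len, reverse=True)
    return [ordered[i:i + batch_size]
            for i in range(0, len(ordered), batch_size)]

def pad_batch(batch):
    """Zero-pad a mini-batch to the length of its longest sequence."""
    max_len = max(len(s) for s in batch)
    return np.array([np.pad(np.asarray(s), (0, max_len - len(s)),
                            mode='constant')
                     for s in batch])

# Training in descending-length batch order works fine:
#   batches = make_batches(train_seqs, 32)
# but shuffling the batch order triggers the out-of-memory error
# at an arbitrary point in training:
#   random.shuffle(batches)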

Thanks,

Narendra
