Some operations on the GPU are not deterministic; certain convolution 
operations and reduction operations are two examples, since they can rely on 
atomic adds whose accumulation order varies from run to run. See this thread 
for more info:
https://groups.google.com/forum/#!searchin/theano-users/atomic$20add%7Csort:relevance/theano-users/g-BF6zwMirM/ojWzbUBPBwAJ
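
To see why the accumulation order matters, here is a minimal float32 sketch (plain NumPy, not Theano-specific, with made-up values for illustration): floating-point addition is not associative, so a parallel reduction that adds the same terms in a different order can land on a different result.

```python
import numpy as np

# Three float32 values whose sum depends on the order of addition.
a = np.float32(1e8)
b = np.float32(-1e8)
c = np.float32(1.0)

# (a + b) first: the large terms cancel exactly, then c survives.
left = (a + b) + c   # 1.0

# (a + c) first: c is smaller than float32's spacing at 1e8 (~8),
# so it is rounded away before b cancels a.
right = (a + c) + b  # 0.0

print(left, right)  # 1.0 0.0 -- same terms, different result
```

A GPU reduction using atomic adds effectively picks one such ordering nondeterministically on each run, which is why two runs with identical seeds can still diverge.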

On Tuesday, July 18, 2017 at 1:20:41 PM UTC-7, Wenpeng Yin wrote:
>
> Hi guys,
>
> I have a long-standing problem when running Theano code on the GPU: even if 
> I run the same program in two command windows (on the same GPU or on 
> different GPUs), the two runs show different performance. Whether the 
> difference is small or large depends on the task. This makes it difficult 
> to judge whether a modification to the program is better or worse.
>
> I cannot find the cause, since I always use the same random seed, for 
> example "rng = numpy.random.RandomState(23455)", whenever I create 
> parameters, so the runs should repeat the same process, right? 
>
> The only thing I can think of is that the GPU uses 32-bit floats, not 
> 64-bit; could this lose precision?
>
> Thanks for any hints.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/d/optout.
