Here is a guide on how to deal with NaNs:

http://deeplearning.net/software/theano/tutorial/nan_tutorial.html
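
For example, a minimal sketch along the lines of that tutorial's NanGuardMode
approach (the toy graph and variable names here are just placeholders; adapt it
to your own inputs and outputs):

    import numpy as np
    import theano
    import theano.tensor as T
    from theano.compile.nanguardmode import NanGuardMode

    # Placeholder graph: a matrix input multiplied by a shared weight.
    x = T.matrix('x')
    w = theano.shared(np.random.randn(5, 3).astype(theano.config.floatX), 'w')
    y = T.dot(x, w)

    # NanGuardMode raises an error as soon as any op's inputs or outputs
    # contain a NaN, Inf, or very large value, which points at the node
    # that produced it.
    f = theano.function(
        [x], y,
        mode=NanGuardMode(nan_is_error=True, inf_is_error=True,
                          big_is_error=True))
    f(np.random.randn(2, 5).astype(theano.config.floatX))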

Fred

On Sat, Jul 9, 2016 at 4:34 PM, shashank gupta <[email protected]> wrote:

> Yes, I tried batch GD and passed one example at a time. With that I am
> getting 1-2 NaNs, which I handled with a try/except block. But the cost is
> not decreasing in that case. I am not sure whether I have made a mistake
> while writing the model or whether it is something else.
>
>
> On Saturday, July 9, 2016 at 10:45:41 PM UTC+5:30, Dustin Ezell wrote:
>>
>> Very welcome. How large is the X you're passing in? Try passing in just a
>> few rows and see if you still get NaNs. You may need to use batch gradient
>> descent with smaller batches.
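>>
>> Something along these lines (a rough sketch; X, Y, and train_fn are
>> placeholders for your own data arrays and compiled Theano update function):
>>
>>     import numpy as np
>>
>>     batch_size = 32  # start small and increase once things are stable
>>     for start in range(0, X.shape[0], batch_size):
>>         xb = X[start:start + batch_size]
>>         yb = Y[start:start + batch_size]
>>         cost = train_fn(xb, yb)  # placeholder for your compiled update step
>>         if np.isnan(cost):
>>             print("first NaN cost at batch starting at row", start)
>>             break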
>>
>> On Friday, July 8, 2016 at 11:22:56 AM UTC-5, shashank gupta wrote:
>>>
>>> Thanks for replying. No, I am running it on the CPU (to validate the
>>> model).
>>>
>>> On Friday, July 8, 2016 at 9:35:58 PM UTC+5:30, Dustin Ezell wrote:
>>>>
>>>> Are you running this on the GPU? How big is X? I had a NaN problem from a
>>>> cost function running batch gradient descent that would only show up after
>>>> several batches. Best I can reckon, my GPU is running out of memory, since
>>>> I can perform numerous iterations before running into issues.
>>>>
