I am writing an autoencoder model using Theano (I am very new to Theano). The cost function has a sparsity constraint. The KL divergence term produces NaN values in the array, and when I sum that array to add it to the overall cost, the result is also NaN. Is there any way to get around this problem?
# element-wise KL divergence between rho and rho_hat for the sparsity constraint
KL = rho * T.log(rho / rho_hat) + (1 - rho) * T.log((1 - rho) / (1 - rho_hat))
# sparsity cost
SPcost = beta * KL.sum()
# the loss function
loss = T.nnet.categorical_crossentropy(y_hat, y).mean() + loss_reg
I am trying to debug using a test function:
test = theano.function([X], SPcost)
test(train_X)
SPcost should be a single scalar value, but instead this call returns array(nan).
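Checking the element-wise KL values in the same way (a quick sketch along the same lines, reusing the X, KL, and train_X defined above) confirms that some of the entries are NaN:

import numpy as np
import theano

# compile a function that returns the element-wise KL term
kl_fn = theano.function([X], KL)
kl_vals = kl_fn(train_X)
print(np.isnan(kl_vals).sum(), "NaN entries out of", kl_vals.size)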
I have also tried numpy's nansum(), but that gives me an error. What is the correct way of summing an array that contains NaN values? Any suggestion would be much appreciated.
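One workaround I was considering is to mask the NaN entries symbolically before summing (just a rough sketch; I am not sure whether zeroing out the NaN entries is actually the right thing to do for the sparsity penalty):

import theano.tensor as T

# replace any NaN entries in the element-wise KL term with 0 before summing
KL_clean = T.switch(T.isnan(KL), 0., KL)
SPcost = beta * KL_clean.sum()

Would something like this be acceptable, or is it better to prevent the NaNs from appearing in the first place?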