Hi,

I have a matrix whose entries I need to raise to a certain power and then 
normalize by row. When I then pass some of the resulting rows to 
numpy.random.choice as the p argument, I get "ValueError: probabilities do 
not sum to 1".

I understand that floating point is not perfect, and my matrix is so large that 
I cannot use np.longdouble because I will run out of RAM.
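
For context, here is the precision/memory trade-off I am up against
(machine epsilon vs. bytes per element for the dtypes I could use):

import numpy as np

# float32 carries roughly 7 significant decimal digits; each step up in
# precision roughly doubles the memory per element (longdouble size is
# platform dependent).
for dt in (np.float32, np.float64, np.longdouble):
    print(np.dtype(dt).name, np.finfo(dt).eps, np.dtype(dt).itemsize, "bytes")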

As an example on a smaller matrix:

import numpy as np

# mymatrix is a large square float32 array
np.power(mymatrix, 10, out=mymatrix)
row_normalized = np.apply_along_axis(lambda x: x / np.sum(x), 1, mymatrix)
sums = row_normalized.sum(axis=1)
sums[np.where(sums != 1)]

array([ 0.99999994,  0.99999994,  1.00000012, ...,  0.99999994,
        0.99999994,  0.99999994], dtype=float32)

np.random.choice(range(row_normalized.shape[0]), 1, p=row_normalized[0, :])
…
ValueError: probabilities do not sum to 1
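
My guess (from the behaviour, I have not checked the source) is that choice
converts p to float64 and compares its sum to 1 with a fairly tight
tolerance, roughly along these lines:

import numpy as np

# Rough paraphrase of the check that seems to fire -- not the actual NumPy
# source. The tolerance is around sqrt(float64 eps) ~ 1.5e-8, which is much
# tighter than float32's epsilon (~1.2e-7), so a row normalized in float32
# can easily miss it.
p = np.random.rand(10000).astype(np.float32)
p /= p.sum()                                  # row-normalize in float32
atol = np.sqrt(np.finfo(np.float64).eps)      # ~1.49e-8
err = abs(p.astype(np.float64).sum() - 1.0)
print(err, atol, err > atol)                  # err here is often around 1e-7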


I also tried the normalize function in sklearn.preprocessing and ran into 
the same problem.
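
For reference, my sklearn attempt was along these lines (row-wise L1
normalization; the exact call is reconstructed from memory):

import numpy as np
from sklearn.preprocessing import normalize

# Stand-in for mymatrix from the example above (float32).
mymatrix = np.random.rand(1000, 1000).astype(np.float32) ** 10

# Same idea as the apply_along_axis version: divide each row by its L1
# norm. The result is still float32, so the row sums show the same drift.
row_normalized = normalize(mymatrix, norm='l1', axis=1)
print(row_normalized.sum(axis=1))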

Is there a way to avoid this problem without having to manually adjust 
each row so that it sums to exactly 1?

— Ryan
