Hi,

I just found that using dot instead of sum in NumPy gives me results with less precision loss. For example, I optimized a function with scipy.optimize.fmin_bfgs. For the return value of the function, I tried the following two things:

sum(Xb) - sum(denominator)

and

dot(ones(Xb.shape), Xb) - dot(ones(denominator.shape), denominator)

Both of them are supposed to yield the same value, but the first one gave me -589112.30492110562 and the second one gave me -589112.30492110678.
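For reference, here is a minimal self-contained sketch of the comparison, using random stand-ins for my actual arrays (the sizes and data below are made up, not my real Xb and denominator):

import numpy as np

rng = np.random.RandomState(0)
Xb = rng.normal(size=100000)           # stand-in for my Xb
denominator = rng.normal(size=100000)  # stand-in for my denominator

# Same reduction computed two ways. Floating-point addition is not
# associative, so a different accumulation order can round differently.
via_sum = np.sum(Xb) - np.sum(denominator)
via_dot = (np.dot(np.ones(Xb.shape), Xb)
           - np.dot(np.ones(denominator.shape), denominator))

print(via_sum, via_dot, via_sum - via_dot)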

In addition, with the routine using sum, the optimizer gave me "Warning: Desired error not necessarily achieved due to precision loss." With the routine using dot, the optimizer gave me "Optimization terminated successfully."
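In case it helps, the call looks roughly like this (the objective and gradient below are hypothetical toy stand-ins for my real likelihood, just to show how I pass the analytical gradient via fprime):

import numpy as np
from scipy.optimize import fmin_bfgs

def f(beta):
    # toy objective; my real one ends with the sum/dot expression above
    return np.sum((beta - 1.0) ** 2)

def grad(beta):
    # analytical gradient of the toy objective
    return 2.0 * (beta - 1.0)

beta0 = np.zeros(3)
beta_hat = fmin_bfgs(f, beta0, fprime=grad)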

I checked the gradient value as well (I provided an analytical gradient), and the gradient was smaller in the dot case too. (Of course, the magnitude was only around 1e-5 to 1e-6, but still.)

I was wondering if this is a well-known fact and whether I'm supposed to use dot instead of sum whenever possible.

It would be great if someone could let me know why this happens.

Thank you,
Joon

_______________________________________________
NumPy-Discussion mailing list
[email protected]
http://mail.scipy.org/mailman/listinfo/numpy-discussion
