Hi,

I am applying Monte Carlo to a problem involving mixed deterministic
and random values.  To avoid a lot of special handling and corner
cases, I am using numpy arrays filled with a single value to
represent the deterministic quantities.

Anyway, I found that the standard deviation of these deterministic
arrays turns out to be non-zero when they take on large values, which
is wrong.  The culprit is floating-point precision loss in
accumulating the mean.
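
For concreteness, here is roughly the kind of thing I mean (a minimal
sketch; the exact numbers depend on the dtype, the array size, and the
NumPy version):

    import numpy as np

    # A "deterministic" quantity, represented as an array of identical values.
    value = 1.0e8
    x = np.full(1000000, value, dtype=np.float32)

    # Ideally both results are exact: mean == value and std == 0.  In
    # practice the accumulated mean can drift slightly away from `value`
    # for large magnitudes, and the standard deviation then comes out
    # non-zero even though every element is identical.
    print(x.mean())
    print(x.std())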

It turns out to be fairly straightforward to check for this situation
up front; see the attached code.  I've also included a more accurate
algorithm for computing the mean, but it adds an extra multiplication
for every term in the sum, which is obviously undesirable from a
performance perspective.  Would it make sense to automatically detect
the precision loss and fall back to the more accurate approach in that
case?
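
In case the attachment doesn't make it through the list, here is a
rough sketch of the kind of check and alternative I have in mind.
This is only an illustration, not the attached code itself: the
function names and the error-bound heuristic are placeholders, and the
alternative mean shown here is the standard incremental update, which
may not be exactly the formulation in the attachment.

    import numpy as np

    def mean_may_lose_precision(x):
        """Placeholder heuristic: flag arrays whose naively accumulated
        mean can pick up round-off comparable to the spread of the data.
        Assumes a non-empty floating-point array."""
        x = np.asanyarray(x)
        eps = np.finfo(x.dtype).eps
        # Rough bound on the round-off accumulated while summing the array.
        err_bound = x.size * eps * np.abs(x).max()
        # If that bound rivals the spread of the data, the computed mean
        # (and hence the std) is dominated by round-off, not by the data.
        return err_bound >= np.ptp(x)

    def incremental_mean(x):
        """Running update m += (x_k - m) / k: the accumulator stays at
        the scale of the data rather than the scale of the full sum, so
        a large common offset is not amplified.  Written as a plain
        Python loop for clarity; the real thing would sit in the C
        summation loop."""
        m = 0.0
        for k, xk in enumerate(np.ravel(x), start=1):
            m += (float(xk) - m) / k
        return m

For the filled array in the example above, the check fires (the spread
is zero while the error bound isn't), and the incremental mean recovers
the exact value, so the standard deviation comes out as zero.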

If that seems OK, I can take a look at the numpy code and submit a
patch.

Best wishes,
Mike

Attachment: mean-problem
