On Thu, May 8, 2008 at 11:18 AM, David Cournapeau <[EMAIL PROTECTED]>
wrote:

> On Fri, May 9, 2008 at 2:06 AM, Nadav Horesh <[EMAIL PROTECTED]>
> wrote:
> > Isn't the 80-bit float (float96 on IA32, float128 on AMD64) enough?
> > It has a 64-bit mantissa and can represent numbers up to nearly
> > 1E(+-)5000.
>
> It only makes the problem happen later, I think. If you have a GMM
> (Gaussian mixture model) with millions of high-dimensional samples and
> many clusters, the likelihood is a product of that many per-sample
> densities, so any "linear" representation of it will over- or
> underflow. In that sense, the IEEE format is not adequate for that
> kind of computation.
>
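
(For reference, the usual workaround is to keep everything in the log
domain and reduce with the log-sum-exp trick; a minimal numpy sketch of
the idea, not David's actual code:)

import numpy as np

def logsumexp(log_p, axis=-1):
    # log(sum(exp(log_p), axis)) without ever forming exp(log_p):
    # factor out the maximum so the largest term exponentiates to 1.0.
    m = np.max(log_p, axis=axis, keepdims=True)
    return np.squeeze(m, axis=axis) + np.log(np.exp(log_p - m).sum(axis=axis))

# Weighted per-cluster log densities for one sample; exponentiating
# them directly would underflow straight to 0.0 in any IEEE format.
log_wpdf = np.array([-1200.0, -1210.0, -1250.0])
log_like = logsumexp(log_wpdf)          # finite, about -1199.99995
resp = np.exp(log_wpdf - log_like)      # responsibilities, sum to 1.0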

David, what you are using is a log(log(x)) representation internally. IEEE
is *not* linear; the exponent field makes it logarithmic.
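
(To make that concrete: once you store log(p) in a double, the double's
own binary exponent field encodes roughly log2(|log(p)|), which is the
log(log(x)) representation. A small illustration, with made-up values:)

import numpy as np

p = np.float64(1e-300)
print(p * p)                 # 0.0 -- underflows in the linear domain
logp = 2 * np.log(1e-300)    # about -1381.55, an ordinary double
m, e = np.frexp(logp)        # IEEE stores logp as m * 2**e,
print(m, e)                  # so e ~ log2(|logp|): a log of a log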

Chuck