From: Matthieu Brucher [mailto:matthieu.bruc...@gmail.com]
Sent: Thursday, December 17, 2015 7:56 AM
To: scikit-learn-general@lists.sourceforge.net
Subject: Re: [Scikit-learn-general] sklearn.preprocessing.normalize does not sum to 1
The thing is that even if you did sum and divide by the sum, summing
the results back may not come out to exactly 1.0. This is always the
"issue" in floating-point computation.
Cheers,
Matthieu
2015-12-17 8:26 GMT+01:00 Ryan R. Rosario:
> Hi,
>
> I have a very large dense numpy matrix. To avoid running out
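A minimal sketch of the point Matthieu makes above (my own example, not from the original thread; the values are made up): dividing a vector by its sum still rounds each quotient, so the result may not sum back to exactly 1.0.

    import numpy as np

    x = np.array([0.1, 0.2, 0.3, 0.7])
    p = x / x.sum()                  # normalize by dividing by the sum
    print(p.sum())                   # may print 0.9999999999999999 rather than 1.0
    print(p.sum() == 1.0)            # can be False purely because of rounding
    print(np.isclose(p.sum(), 1.0))  # True: it is 1.0 up to floating-point tolerance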
Ryan,
Have you tried a small problem to see if the float32 datatype is causing you
problems? float64 gives 15-17 significant decimal digits, so you may not get an
exact 1.0 representation even there, and with float32 (roughly 7 digits) even
less so.
I am not sure this will help you, but take a look at numpy.memmap.
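A rough illustration of the precision point above (again my own example, not from the thread): the same normalization done in float32 tends to drift further from 1.0 than in float64.

    import numpy as np

    rng = np.random.default_rng(0)
    x64 = rng.random(1_000_000)           # float64 data
    x32 = x64.astype(np.float32)          # same data, single precision

    p64 = x64 / x64.sum()
    p32 = x32 / x32.sum()

    print(abs(p64.sum() - 1.0))           # typically around 1e-16 to 1e-14
    print(abs(float(p32.sum()) - 1.0))    # typically orders of magnitude larger

If memory is the constraint, numpy.memmap lets the matrix live in a file on disk and be processed in chunks instead of being loaded into RAM all at once.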
Hm, since you have problems with memory already, the longdouble wouldn't be an
option I guess. However, what about using numpy.around to reduce the precision
by a few decimals?
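A sketch of the numpy.around suggestion above; the final "absorb the residue" line is my addition, not something proposed in the thread, and even it does not strictly guarantee an exact 1.0.

    import numpy as np

    x = np.array([0.12345, 0.54321, 0.33334])
    p = x / x.sum()
    p = np.around(p, decimals=6)   # reduce the precision by a few decimals
    p[-1] += 1.0 - p.sum()         # push the rounding residue into one entry
    print(p.sum())                 # usually exactly 1.0 now, though still not guaranteed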