-Original Message-
From: Matthieu Brucher [mailto:matthieu.bruc...@gmail.com]
Sent: Thursday, December 17, 2015 7:56 AM
To: scikit-learn-general@lists.sourceforge.net
Subject: Re: [Scikit-learn-general] sklearn.preprocessing.normalize does not sum to 1
The thing is that even if you summed each row and divided by that sum
yourself, summing the results back may not give exactly 1.0. This is the
usual "issue" in floating-point computation.
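
A quick sketch of this point (the array size and seed here are arbitrary,
not from the thread):

    import numpy as np

    # Dividing a float32 vector by its own sum does not guarantee the
    # normalized entries add back to exactly 1.0: each division rounds,
    # and so does the final summation.
    rng = np.random.default_rng(0)
    x = rng.random(1000).astype(np.float32)
    x /= x.sum()
    print(x.sum())         # typically something like 0.99999994 or 1.0000001
    print(x.sum() == 1.0)  # may well be False
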
Cheers,
Matthieu
2015-12-17 8:26 GMT+01:00 Ryan R. Rosario :
> Hi,
>
> I have a very large dense numpy matrix. To avoid running out [...]
Hm, since you already have problems with memory, longdouble wouldn't be an
option, I guess. However, what about using numpy.around to reduce the
precision by a few decimals?
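
A rough sketch of that idea (the shape and number of decimals are arbitrary):
rather than demanding exact equality with 1.0, compare the row sums at
reduced precision.

    import numpy as np

    X = np.random.rand(4, 5).astype(np.float32)
    X /= X.sum(axis=1, keepdims=True)   # L1-normalize each row
    row_sums = X.sum(axis=1)

    print(row_sums == 1.0)                         # exact check: often False
    print(np.around(row_sums, decimals=5) == 1.0)  # rounded check: passes when
                                                   # the error is ~float32 eps

(np.isclose(row_sums, 1.0) is the more idiomatic tolerance check.)
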
Sent from my iPhone
> On Dec 17, 2015, at 8:26 AM, Ryan R. Rosario wrote:
>
> Hi,
>
> I have a very large dense numpy matrix. To avoid running out of RAM, I use
> np.float32 as the dtype instead of the default np.float64 on my system.
> When I do an L1 normalization of the rows (axis=1) in my matrix in-place
> (copy=False), I frequently get rows that do not sum to 1. Since these [...]
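
For reference, a short sketch that reproduces the reported setup (the matrix
shape is invented; the original dimensions were not quoted in the thread):

    import numpy as np
    from sklearn.preprocessing import normalize

    # Hypothetical stand-in for the large dense matrix described above.
    X = np.random.rand(10000, 300).astype(np.float32)

    # L1-normalize the rows in place, as in the report.
    normalize(X, norm='l1', axis=1, copy=False)

    row_sums = X.sum(axis=1)
    print((row_sums != 1.0).sum(), "of", len(row_sums),
          "rows do not sum to exactly 1.0")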