[ https://issues.apache.org/jira/browse/SPARK-5089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Jeremy Freeman updated SPARK-5089:
----------------------------------
Description:
Prior to performing many MLlib operations in PySpark (e.g. KMeans), data are
automatically converted to {{DenseVectors}}. If the data are numpy arrays with
dtype {{float64}}, this works. If the data are numpy arrays with lower precision
(e.g. {{float16}} or {{float32}}), they should be upcast to {{float64}}, but due
to a small bug on the line below this currently doesn't happen ({{astype}} does
not cast in place; it returns a new array, which is discarded):
{code:none}
if ar.dtype != np.float64:
    ar.astype(np.float64)
{code}
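Since {{numpy.ndarray.astype}} returns an upcast copy rather than modifying its
argument, the fix presumably just assigns the result back; a minimal sketch:
{code:none}
if ar.dtype != np.float64:
    # astype returns a new upcast array; rebind it so the conversion takes effect
    ar = ar.astype(np.float64)
{code}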
Non-float64 values are in turn mangled during SerDe. This can have significant
consequences. For example, the following yields confusing and erroneous results:
{code:none}
from numpy import random
from pyspark.mllib.clustering import KMeans
data = sc.parallelize(random.randn(100,10).astype('float32'))
model = KMeans.train(data, k=3)
len(model.centers[0])
>> 5 # should be 10!
{code}
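The halved dimensionality is consistent with each row's raw {{float32}} bytes
being read back as {{float64}} during SerDe: 10 single-precision values occupy
40 bytes, which deserialize as 5 doubles. A standalone numpy sketch of that
reinterpretation (using {{view}} only to mimic the assumed byte-level reread):
{code:none}
import numpy as np

row = np.random.randn(10).astype(np.float32)  # 10 values x 4 bytes = 40 bytes
print(row.view(np.float64).shape)             # (5,) -- same 40 bytes read as 5 doubles
{code}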
But this works fine:
{code:none}
data = sc.parallelize(random.randn(100,10).astype('float64'))
model = KMeans.train(data, k=3)
len(model.centers[0])
>> 10 # this is correct
{code}
The fix is trivial; I'll submit a PR shortly.
was:
Prior to performing many MLlib operations in PySpark (e.g. KMeans), data are
automatically converted to {{DenseVectors}}. If the data are numpy arrays with
dtype {{float64}}, this works. If the data are numpy arrays with lower precision
(e.g. {{float16}} or {{float32}}), they should be upcast to {{float64}}, but due
to a small bug on the line below this currently doesn't happen ({{astype}} does
not cast in place; it returns a new array, which is discarded):
{code:python}
if ar.dtype != np.float64:
    ar.astype(np.float64)
{code}
Non-float64 values are in turn mangled during SerDe. This can have significant
consequences. For example, the following yields confusing and erroneous results:
{code:python}
from numpy import random
from pyspark.mllib.clustering import KMeans
data = sc.parallelize(random.randn(100,10).astype('float32'))
model = KMeans.train(data, k=3)
len(model.centers[0])
>> 5 # should be 10!
{code}
But this works fine:
{code:python}
data = sc.parallelize(random.randn(100,10).astype('float64'))
model = KMeans.train(data, k=3)
len(model.centers[0])
>> 10 # this is correct
{code}
The fix is trivial; I'll submit a PR shortly.
> Vector conversion broken for non-float64 arrays
> -----------------------------------------------
>
> Key: SPARK-5089
> URL: https://issues.apache.org/jira/browse/SPARK-5089
> Project: Spark
> Issue Type: Bug
> Components: MLlib, PySpark
> Affects Versions: 1.2.0
> Reporter: Jeremy Freeman
>
> Prior to performing many MLlib operations in PySpark (e.g. KMeans), data are
> automatically converted to {{DenseVectors}}. If the data are numpy arrays
> with dtype {{float64}}, this works. If the data are numpy arrays with lower
> precision (e.g. {{float16}} or {{float32}}), they should be upcast to
> {{float64}}, but due to a small bug on the line below this currently doesn't
> happen ({{astype}} does not cast in place; it returns a new array, which is
> discarded):
> {code:none}
> if ar.dtype != np.float64:
>     ar.astype(np.float64)
> {code}
>
> Non-float64 values are in turn mangled during SerDe. This can have
> significant consequences. For example, the following yields confusing and
> erroneous results:
> {code:none}
> from numpy import random
> from pyspark.mllib.clustering import KMeans
> data = sc.parallelize(random.randn(100,10).astype('float32'))
> model = KMeans.train(data, k=3)
> len(model.centers[0])
> >> 5 # should be 10!
> {code}
> But this works fine:
> {code:none}
> data = sc.parallelize(random.randn(100,10).astype('float64'))
> model = KMeans.train(data, k=3)
> len(model.centers[0])
> >> 10 # this is correct
> {code}
> The fix is trivial; I'll submit a PR shortly.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]