[ https://issues.apache.org/jira/browse/SPARK-17130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15426780#comment-15426780 ]
Jon Zhong commented on SPARK-17130:
-----------------------------------
Yep, I wrote a comment there but I deleted it, since I'm not sure whether they are
fixing this problem together.
The problem mentioned in SPARK-16965 is more about negative indices. Are they
also concerned about unordered indices?
> SparseVector.apply and SparseVector.toArray return different results when
> created with illegal indices
> ---------------------------------------------------------------------------------------------------------
>
> Key: SPARK-17130
> URL: https://issues.apache.org/jira/browse/SPARK-17130
> Project: Spark
> Issue Type: Bug
> Components: ML, MLlib
> Affects Versions: 1.6.2, 2.0.0
> Environment: spark 1.6.1 + scala
> Reporter: Jon Zhong
> Priority: Minor
>
> One of my colleagues ran into a bug in SparseVector. He called
> Vectors.sparse(size: Int, indices: Array[Int], values: Array[Double]) without
> noticing that the indices are assumed to be ordered.
> The vector he created returns a value of 0.0 for every index (without any
> warning) if we retrieve values via the apply method. However,
> SparseVector.toArray generates an array using a method that is
> order-insensitive. Hence, you get 0.0 when you call the apply method, while
> you get the correct result using the toArray or toDense methods. The result of
> SparseVector.toArray is therefore misleading.
> It would be safer to validate the indices in the constructor, or at least to
> make the apply and toArray methods return consistent results (see the sketch
> below).
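
For reference, here is a minimal sketch that reproduces the discrepancy
described above. It assumes Spark 1.6.x / 2.0.x with the
org.apache.spark.mllib.linalg package on the classpath; the object name and the
chosen size/indices/values are illustrative only.

{code:scala}
import org.apache.spark.mllib.linalg.Vectors

object UnorderedIndicesRepro {
  def main(args: Array[String]): Unit = {
    // Indices (3, 1) are deliberately NOT sorted; the constructor accepts them
    // silently, with no validation and no warning.
    val v = Vectors.sparse(5, Array(3, 1), Array(7.0, 9.0))

    // apply assumes the indices are sorted (a binary-search-style lookup under
    // the hood), so with unsorted indices it may silently return 0.0 for
    // entries that were actually stored.
    println(v(1)) // may print 0.0 instead of 9.0
    println(v(3)) // may print 0.0 instead of 7.0

    // toArray scatters the (index, value) pairs into a dense array and does not
    // depend on ordering, so it recovers the intended values.
    println(v.toArray.mkString(", ")) // 0.0, 9.0, 0.0, 7.0, 0.0
    println(v.toDense)                // [0.0,9.0,0.0,7.0,0.0]
  }
}
{code}

Presumably the same applies to org.apache.spark.ml.linalg.Vectors as well, given
that the issue lists both the ML and MLlib components; validating (or sorting)
the indices in the constructor would make both code paths agree.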