[ https://issues.apache.org/jira/browse/SPARK-16426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15371664#comment-15371664 ]

Apache Spark commented on SPARK-16426:
--------------------------------------

User 'neggert' has created a pull request for this issue:
https://github.com/apache/spark/pull/14140

> IsotonicRegression produces NaNs with certain data
> --------------------------------------------------
>
>                 Key: SPARK-16426
>                 URL: https://issues.apache.org/jira/browse/SPARK-16426
>             Project: Spark
>          Issue Type: Bug
>          Components: MLlib
>    Affects Versions: 1.3.1, 1.4.1, 1.5.2, 1.6.2
>            Reporter: Nic Eggert
>
> {code}
> import org.apache.spark.mllib.regression.IsotonicRegression
>
> // (label, feature, weight) triples, deliberately split across 2 partitions
> val r = sc.parallelize(Seq[(Double, Double, Double)](
>   (2, 1, 1), (1, 1, 1), (0, 2, 1), (1, 2, 1), (0.5, 3, 1), (0, 3, 1)), 2)
> val i = new IsotonicRegression().run(r)
>
> scala> i.predict(3.0)
> res12: Double = NaN
>
> scala> i.predictions
> res13: Array[Double] = Array(0.75, 0.75, NaN, NaN)
> {code}
> I believe I understand the problem, so I'll submit a PR shortly.
> The problem happens when rows with the same feature value but different 
> labels end up on different partitions. The merge function in 
> poolAdjacentViolators introduces 0-weight points to be used for linear 
> interpolation. This works fine as long as each 0-weight point sits next to 
> a non-0-weight point, but in the case above you can end up with two 
> 0-weight points with the same feature value that land next to each other in 
> the final PAV step. Pooling them takes a weighted average over a total 
> weight of 0, which produces NaN (0.0 / 0.0).
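> A minimal sketch of the failing arithmetic (hypothetical values, not the 
> exact mllib pooling code, which operates in place over arrays):
> {code}
> // Two adjacent 0-weight points with the same feature value,
> // as (label, feature, weight) triples.
> val pooled = Seq((1.5, 2.0, 0.0), (2.5, 2.0, 0.0))
>
> // PAV pooling replaces a run of points with their weighted-average label.
> val weightedSum = pooled.map { case (label, _, weight) => label * weight }.sum // 0.0
> val totalWeight = pooled.map { case (_, _, weight) => weight }.sum             // 0.0
> val pooledLabel = weightedSum / totalWeight // 0.0 / 0.0 = Double.NaN
> {code}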
> One solution is to ensure that all points with identical feature values end 
> up on the same partition. This is the solution I intend to submit a PR for. 
> Another option would be to get rid of the 0-weight points entirely, but 
> that seems trickier to me.
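> For illustration, a sketch (not the actual patch) of one way to co-partition 
> identical feature values, reusing the (label, feature, weight) RDD {{r}} 
> from the reproduction above; the names are hypothetical:
> {code}
> import org.apache.spark.HashPartitioner
>
> // Key each triple by its feature value and hash-partition, so all rows that
> // share a feature value land on the same partition before PAV runs.
> val keyed = r.keyBy { case (_, feature, _) => feature }
> val coPartitioned = keyed
>   .partitionBy(new HashPartitioner(r.partitions.length))
>   .values // back to (label, feature, weight) triples
> {code}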


