[ https://issues.apache.org/jira/browse/SPARK-9096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14732358#comment-14732358 ]

Alexey Pechorin commented on SPARK-9096:
----------------------------------------

This problem has happened to me as well, while using sample/subtract on an RDD 
of LabeledPoint whose elements hold a sparse vector (and a 0/1 double label), so 
switching to sparse vectors won't help here.
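For reference, here is a rough, minimal sketch of the kind of setup I mean (the 
class name, data and sizes are invented for illustration, not taken from my 
actual application): LabeledPoints with 0/1 labels and sparse feature vectors, 
split with sample() and removed with subtract() before an action.

{code:java}
import java.util.ArrayList;
import java.util.List;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.mllib.linalg.Vectors;
import org.apache.spark.mllib.regression.LabeledPoint;

public class SampleSubtractSketch {
  public static void main(String[] args) {
    JavaSparkContext sc = new JavaSparkContext(new SparkConf().setAppName("SampleSubtractSketch"));

    // LabeledPoints with a 0/1 label and a sparse feature vector, as in my case.
    List<LabeledPoint> points = new ArrayList<>();
    for (int i = 0; i < 100000; i++) {
      points.add(new LabeledPoint(i % 2,
          Vectors.sparse(100000, new int[] {i}, new double[] {1.0})));
    }

    JavaRDD<LabeledPoint> all = sc.parallelize(points, 200);
    JavaRDD<LabeledPoint> training = all.sample(false, 0.8, 42L);
    // subtract() shuffles the elements keyed by their own hashCode.
    JavaRDD<LabeledPoint> test = all.subtract(training);
    // If many elements share a hash code, this action runs as a few huge tasks.
    System.out.println(test.count());

    sc.stop();
  }
}
{code}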
In my opinion, it's conceptually wrong for a hash function to ignore some of the 
object's fields, and it opens the door to many other partitioning problems when 
using Vector, which can be very hard to debug in a complex application. In 
addition, it can degrade the performance of existing Spark code that uses 
sample/subtract on sparse data, e.g. the [Spark MLlib SVM 
example|http://spark.apache.org/docs/latest/mllib-linear-methods.html#linear-support-vector-machines-svms]
 in the MLlib documentation, so the change is not entirely backward compatible.
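To illustrate the partitioning concern with a toy example (this is deliberately 
not Spark's actual Vector.hashCode, just a stand-in class whose hashCode ignores 
most of its fields): a HashPartitioner with 200 partitions collapses 100,000 
distinct records into at most two partitions, which is exactly the skew pattern 
reported in this ticket.

{code:java}
import org.apache.spark.HashPartitioner;

public class HashSkewIllustration {

  // Stand-in for a class whose hashCode ignores most of its fields
  // (NOT Spark's actual Vector.hashCode, just a toy with the same flaw).
  static final class Record {
    final double label;       // used by hashCode
    final double[] features;  // ignored by hashCode

    Record(double label, double[] features) {
      this.label = label;
      this.features = features;
    }

    @Override
    public int hashCode() {
      return Double.valueOf(label).hashCode();  // only the label contributes
    }
  }

  public static void main(String[] args) {
    HashPartitioner partitioner = new HashPartitioner(200);
    int[] counts = new int[200];
    for (int i = 0; i < 100000; i++) {
      // 0/1 labels only: every record hashes to one of two values,
      // so 100,000 distinct records land in at most two of the 200 partitions.
      counts[partitioner.getPartition(new Record(i % 2, new double[] {i}))]++;
    }
    for (int p = 0; p < 200; p++) {
      if (counts[p] > 0) {
        System.out.println("partition " + p + ": " + counts[p] + " records");
      }
    }
  }
}
{code}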
I would suggest providing a way to choose whether Vector.hashCode is optimized 
or not, perhaps via an optional constructor parameter.
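Until such an option exists, one possible workaround, sketched below under the 
assumption that subtract()'s Partitioner overload hands the partitioner each 
element itself (the FullHashPartitioner class is mine, purely illustrative and 
untested): pass a custom Partitioner that hashes the full contents of every 
LabeledPoint instead of relying on its default hashCode.

{code:java}
import java.util.Arrays;

import org.apache.spark.Partitioner;
import org.apache.spark.mllib.regression.LabeledPoint;

public class FullHashPartitioner extends Partitioner {
  private final int numPartitions;

  public FullHashPartitioner(int numPartitions) {
    this.numPartitions = numPartitions;
  }

  @Override
  public int numPartitions() {
    return numPartitions;
  }

  @Override
  public int getPartition(Object key) {
    LabeledPoint p = (LabeledPoint) key;
    // Hash the label and every feature value, not just a subset of the fields.
    // (For very wide sparse vectors one would hash the active entries instead
    // of densifying with toArray().)
    int h = Double.valueOf(p.label()).hashCode();
    h = 31 * h + Arrays.hashCode(p.features().toArray());
    return ((h % numPartitions) + numPartitions) % numPartitions;  // non-negative mod
  }
}

// Usage, with the RDDs from the earlier sketch:
//   JavaRDD<LabeledPoint> test = all.subtract(training, new FullHashPartitioner(200));
{code}

The obvious trade-off is that hashing every feature value costs more per record, 
which is the kind of choice the optional parameter above would leave to the user.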

> Unevenly distributed task loads after using JavaRDD.subtract()
> --------------------------------------------------------------
>
>                 Key: SPARK-9096
>                 URL: https://issues.apache.org/jira/browse/SPARK-9096
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.4.0, 1.4.1
>            Reporter: Gisle Ytrestøl
>            Priority: Minor
>         Attachments: ReproduceBug.java, hanging-one-task.jpg, 
> reproduce.1.3.1.log.gz, reproduce.1.4.1.log.gz
>
>
> When using JavaRDD.subtract(), the tasks appear to be unevenly distributed in 
> the subsequent operations on the new JavaRDD created by "subtract". As a 
> result, a few tasks process almost all the data, and these tasks take a long 
> time to finish. 
> I've reproduced this bug in the attached Java file, which I submit with 
> spark-submit. 
> The logs for 1.3.1 and 1.4.1 are attached. In 1.4.1, we see that a few tasks 
> in the count job take a lot of time:
> 15/07/16 09:13:17 INFO TaskSetManager: Finished task 1459.0 in stage 2.0 (TID 
> 4659) in 708 ms on 148.251.190.217 (1597/1600)
> 15/07/16 09:13:17 INFO TaskSetManager: Finished task 1586.0 in stage 2.0 (TID 
> 4786) in 772 ms on 148.251.190.217 (1598/1600)
> 15/07/16 09:17:51 INFO TaskSetManager: Finished task 1382.0 in stage 2.0 (TID 
> 4582) in 275019 ms on 148.251.190.217 (1599/1600)
> 15/07/16 09:20:02 INFO TaskSetManager: Finished task 1230.0 in stage 2.0 (TID 
> 4430) in 407020 ms on 148.251.190.217 (1600/1600)
> 15/07/16 09:20:02 INFO TaskSchedulerImpl: Removed TaskSet 2.0, whose tasks 
> have all completed, from pool 
> 15/07/16 09:20:02 INFO DAGScheduler: ResultStage 2 (count at 
> ReproduceBug.java:56) finished in 420.024 s
> 15/07/16 09:20:02 INFO DAGScheduler: Job 0 finished: count at 
> ReproduceBug.java:56, took 442.941395 s
> In comparison, all tasks are more or less equal in size when running the same 
> application on Spark 1.3.1. Overall, the
> attached application (ReproduceBug.java) takes about 7 minutes on Spark 
> 1.4.1, and completes in roughly 30 seconds on Spark 1.3.1. 
> Spark 1.4.0 behaves similarly to Spark 1.4.1 with respect to this issue.


