[jira] [Commented] (SPARK-22465) Cogroup of two disproportionate RDDs could lead into 2G limit BUG
[ https://issues.apache.org/jira/browse/SPARK-22465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16304569#comment-16304569 ] Apache Spark commented on SPARK-22465:

User 'jiangxb1987' has created a pull request for this issue: https://github.com/apache/spark/pull/20091

Cogroup of two disproportionate RDDs could lead into 2G limit BUG
-----------------------------------------------------------------

                 Key: SPARK-22465
                 URL: https://issues.apache.org/jira/browse/SPARK-22465
             Project: Spark
          Issue Type: Bug
          Components: Spark Core
    Affects Versions: 1.0.0, 1.0.1, 1.0.2, 1.1.0, 1.1.1, 1.2.0, 1.2.1, 1.2.2, 1.3.0, 1.3.1, 1.4.0, 1.4.1, 1.5.0, 1.5.1, 1.5.2, 1.6.0, 1.6.1, 1.6.2, 1.6.3, 2.0.0, 2.0.1, 2.0.2, 2.1.0, 2.1.1, 2.1.2, 2.2.0
            Reporter: Amit Kumar
            Priority: Critical
             Fix For: 2.3.0

While running my Spark pipeline, it failed with the following exception:

{noformat}
2017-11-03 04:49:09,776 [Executor task launch worker for task 58670] ERROR org.apache.spark.executor.Executor - Exception in task 630.0 in stage 28.0 (TID 58670)
java.lang.IllegalArgumentException: Size exceeds Integer.MAX_VALUE
	at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:869)
	at org.apache.spark.storage.DiskStore$$anonfun$getBytes$2.apply(DiskStore.scala:103)
	at org.apache.spark.storage.DiskStore$$anonfun$getBytes$2.apply(DiskStore.scala:91)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1303)
	at org.apache.spark.storage.DiskStore.getBytes(DiskStore.scala:105)
	at org.apache.spark.storage.BlockManager.getLocalValues(BlockManager.scala:469)
	at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:705)
	at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:334)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:285)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
	at org.apache.spark.scheduler.Task.run(Task.scala:99)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:324)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
{noformat}

After debugging I found that the issue lies in how Spark handles the cogroup of two RDDs. Here is the relevant code from Apache Spark:

{noformat}
/**
 * For each key k in `this` or `other`, return a resulting RDD that contains a tuple with the
 * list of values for that key in `this` as well as `other`.
 */
def cogroup[W](other: RDD[(K, W)]): RDD[(K, (Iterable[V], Iterable[W]))] = self.withScope {
  cogroup(other, defaultPartitioner(self, other))
}

/**
 * Choose a partitioner to use for a cogroup-like operation between a number of RDDs.
 *
 * If any of the RDDs already has a partitioner, choose that one.
 *
 * Otherwise, we use a default HashPartitioner. For the number of partitions, if
 * spark.default.parallelism is set, then we'll use the value from SparkContext
 * defaultParallelism, otherwise we'll use the max number of upstream partitions.
 *
 * Unless spark.default.parallelism is set, the number of partitions will be the
 * same as the number of partitions in the largest upstream RDD, as this should
 * be least likely to cause out-of-memory errors.
 *
 * We use two method parameters (rdd, others) to enforce callers passing at least 1 RDD.
 */
def defaultPartitioner(rdd: RDD[_], others: RDD[_]*): Partitioner = {
  val rdds = (Seq(rdd) ++ others)
  val hasPartitioner = rdds.filter(_.partitioner.exists(_.numPartitions > 0))
  if (hasPartitioner.nonEmpty) {
    hasPartitioner.maxBy(_.partitions.length).partitioner.get
  } else {
    if (rdd.context.conf.contains("spark.default.parallelism")) {
      new HashPartitioner(rdd.context.defaultParallelism)
    } else {
      new HashPartitioner(rdds.map(_.partitions.length).max)
    }
  }
}
{noformat}

Given this, suppose we have two pair RDDs:
* RDD1: a small RDD with little data and few partitions
* RDD2: a huge RDD with lots of data and many partitions

Now suppose we cogroup them:

{noformat}
val RDD3 = RDD1.cogroup(RDD2)
{noformat}

If RDD1 already has a partitioner when it is passed into the cogroup, this can trigger the SPARK-6235 bug: the cogroup's partitioning is decided by that partitioner, so the huge RDD2 can be shuffled into a small number of partitions. One way to address this is probably to add a safety check that ignores the partitioner if the numbers of partitions of the two RDDs differ greatly in magnitude.
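For illustration, here is a minimal, hypothetical sketch of the scenario described above; the names, sizes, and partition counts are made up, and `sc` is assumed to be an existing SparkContext:

{noformat}
import org.apache.spark.HashPartitioner

// RDD1: small, but it carries a partitioner (4 partitions).
val small = sc.parallelize(1 to 1000).map(k => (k, k))
  .partitionBy(new HashPartitioner(4))

// RDD2: huge, many partitions, no partitioner of its own.
val huge = sc.parallelize(1 to 100000000, 2000).map(k => (k % 1000, k))

// defaultPartitioner(small, huge) returns small's partitioner, because it is
// the only partitioner present, so the cogroup output has only 4 partitions
// and all of huge's data is shuffled into them.
val grouped = small.cogroup(huge)
println(grouped.partitions.length)   // 4, regardless of huge's 2000 partitions
{noformat}

With 2000 partitions' worth of data squeezed into 4 partitions, individual shuffle or cached blocks can easily grow past 2 GB, which is what surfaces as the {{Size exceeds Integer.MAX_VALUE}} error in the stack trace above.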
[jira] [Commented] (SPARK-22465) Cogroup of two disproportionate RDDs could lead into 2G limit BUG
[ https://issues.apache.org/jira/browse/SPARK-22465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16293795#comment-16293795 ] Apache Spark commented on SPARK-22465:

User 'sujithjay' has created a pull request for this issue: https://github.com/apache/spark/pull/20002
[jira] [Commented] (SPARK-22465) Cogroup of two disproportionate RDDs could lead into 2G limit BUG
[ https://issues.apache.org/jira/browse/SPARK-22465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292608#comment-16292608 ] Thomas Graves commented on SPARK-22465:

Yes I think that makes sense.
[jira] [Commented] (SPARK-22465) Cogroup of two disproportionate RDDs could lead into 2G limit BUG
[ https://issues.apache.org/jira/browse/SPARK-22465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292581#comment-16292581 ] Sujith Jay Nair commented on SPARK-22465:

Would something along the lines of 'add a safety-check that ignores the partitioner if the number of partitions on the RDDs are very different in magnitude', as the reporter suggests, be a satisfactory solution? Any pointers here would be very helpful.
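For concreteness, one hypothetical shape such a safety check could take, written as a variant of the defaultPartitioner shown in the issue description. This is only a sketch of the idea discussed here, not the code from the linked pull requests, and the one-order-of-magnitude threshold is an arbitrary assumption:

{noformat}
import org.apache.spark.{HashPartitioner, Partitioner}
import org.apache.spark.rdd.RDD

def defaultPartitionerWithCheck(rdd: RDD[_], others: RDD[_]*): Partitioner = {
  val rdds = Seq(rdd) ++ others
  val maxUpstream = rdds.map(_.partitions.length).max
  val hasPartitioner = rdds.filter(_.partitioner.exists(_.numPartitions > 0))
  // Only reuse an existing partitioner if its partition count is within one
  // order of magnitude of the largest upstream RDD's partition count.
  val eligible = hasPartitioner.filter(_.partitioner.get.numPartitions * 10 >= maxUpstream)
  if (eligible.nonEmpty) {
    eligible.maxBy(_.partitions.length).partitioner.get
  } else if (rdd.context.conf.contains("spark.default.parallelism")) {
    new HashPartitioner(rdd.context.defaultParallelism)
  } else {
    new HashPartitioner(maxUpstream)
  }
}
{noformat}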
[jira] [Commented] (SPARK-22465) Cogroup of two disproportionate RDDs could lead into 2G limit BUG
[ https://issues.apache.org/jira/browse/SPARK-22465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292569#comment-16292569 ] Thomas Graves commented on SPARK-22465:

I don't have time at the moment to work on this so if you want to pick it up that would be great.
[jira] [Commented] (SPARK-22465) Cogroup of two disproportionate RDDs could lead into 2G limit BUG
[ https://issues.apache.org/jira/browse/SPARK-22465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292327#comment-16292327 ] Sujith Jay Nair commented on SPARK-22465:

Hi [~tgraves], is there a plan to resolve this behaviour of cogroup outside of the umbrella ticket for fixing the 2G limit ([SPARK-6235])? I wish to chip in if that is the case. Thank you.
[jira] [Commented] (SPARK-22465) Cogroup of two disproportionate RDDs could lead into 2G limit BUG
[ https://issues.apache.org/jira/browse/SPARK-22465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16246595#comment-16246595 ] Thomas Graves commented on SPARK-22465:

It's not strictly the 2G limit. He did hit that, but he hit it because of the default behavior of cogroup. I think this JIRA was filed to look at that and make the behavior better. So I think the last couple of sentences in the description refer to that.
[jira] [Commented] (SPARK-22465) Cogroup of two disproportionate RDDs could lead into 2G limit BUG
[ https://issues.apache.org/jira/browse/SPARK-22465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242856#comment-16242856 ] Sean Owen commented on SPARK-22465:

Is this not indeed just the 2G limit again? You can work around this by repartitioning the larger RDD, right?
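A rough sketch of that kind of workaround, assuming `small` and `huge` are the two pair RDDs from the description (RDD1 and RDD2); the partition count of 2000 is an arbitrary choice. You can either give the large RDD an explicit partitioner with many partitions so that defaultPartitioner picks it, or bypass the default entirely by passing a partitioner to cogroup:

{noformat}
import org.apache.spark.HashPartitioner

// Variant 1: give the huge RDD its own partitioner with many partitions;
// defaultPartitioner then picks this one (maxBy on partition count).
val hugeByKey = huge.partitionBy(new HashPartitioner(2000))
val grouped1 = small.cogroup(hugeByKey)

// Variant 2: bypass defaultPartitioner by passing the partitioner explicitly.
val grouped2 = small.cogroup(huge, new HashPartitioner(2000))
{noformat}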