[ https://issues.apache.org/jira/browse/BEAM-1920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16268917#comment-16268917 ]

ASF GitHub Bot commented on BEAM-1920:
--------------------------------------

jbonofre commented on issue #3808: [BEAM-1920] Add Spark 2.x support in the Spark runner
URL: https://github.com/apache/beam/pull/3808#issuecomment-347557042
 
 
   Potential issue found:
   
   ```
   17/11/27 23:02:23 INFO TaskSetManager: Starting task 1.0 in stage 1.0 (TID 3, sandbox-hdp.hortonworks.com, executor 1, partition 1, PROCESS_LOCAL, 4837 bytes)
   17/11/27 23:02:23 WARN TaskSetManager: Lost task 0.0 in stage 1.0 (TID 2, sandbox-hdp.hortonworks.com, executor 1): java.lang.AbstractMethodError: org.apache.beam.runners.spark.translation.MultiDoFnFunction.call(Ljava/lang/Object;)Ljava/util/Iterator;
       at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$7$1.apply(JavaRDDLike.scala:186)
       at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$7$1.apply(JavaRDDLike.scala:186)
       at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:797)
       at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:797)
       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
       at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
       at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
       at org.apache.spark.rdd.RDD$$anonfun$8.apply(RDD.scala:336)
       at org.apache.spark.rdd.RDD$$anonfun$8.apply(RDD.scala:334)
       at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1038)
       at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1029)
       at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:969)
       at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1029)
       at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:760)
       at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:334)
       at org.apache.spark.rdd.RDD.iterator(RDD.scala:285)
       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
       at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
       at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
       at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
       at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
       at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
       at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
       at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
       at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
       at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
       at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
       at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
       at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
       at org.apache.spark.scheduler.Task.run(Task.scala:108)
       at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
       at java.lang.Thread.run(Thread.java:748)
   ```
   I'm checking ...
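
   Editor's aside (not part of the original comment): an `AbstractMethodError` on a `call(...)Ljava/util/Iterator;` method is characteristic of the Spark 1.x → 2.x API break, where the Java `FlatMapFunction` contract changed `call` from returning `java.lang.Iterable` to returning `java.util.Iterator`, so a function compiled against one version fails at runtime on the other. A minimal sketch of the two contracts, using simplified local stand-ins rather than Spark's real interfaces (the names `FlatMapFunctionV1`/`FlatMapFunctionV2` and `adapt` are illustrative only):

   ```java
   import java.util.ArrayList;
   import java.util.Arrays;
   import java.util.Iterator;
   import java.util.List;

   public class FlatMapSignatureSketch {
       // Simplified stand-in for the Spark 1.x-style contract: call returns an Iterable.
       interface FlatMapFunctionV1<T, R> {
           Iterable<R> call(T t) throws Exception;
       }

       // Simplified stand-in for the Spark 2.x-style contract: call returns an Iterator.
       interface FlatMapFunctionV2<T, R> {
           Iterator<R> call(T t) throws Exception;
       }

       // Bridging a 1.x-style function to the 2.x-style contract is a one-line wrap.
       static <T, R> FlatMapFunctionV2<T, R> adapt(FlatMapFunctionV1<T, R> v1) {
           return t -> v1.call(t).iterator();
       }

       public static void main(String[] args) throws Exception {
           FlatMapFunctionV1<String, String> splitV1 = s -> Arrays.asList(s.split(" "));
           FlatMapFunctionV2<String, String> splitV2 = adapt(splitV1);

           List<String> words = new ArrayList<>();
           Iterator<String> it = splitV2.call("beam on spark");
           while (it.hasNext()) {
               words.add(it.next());
           }
           System.out.println(words); // prints [beam, on, spark]
       }
   }
   ```

   The same mismatch runs in both directions: code compiled against the 1.x interface that is loaded onto a 2.x cluster (or vice versa) has no implementation for the method the runtime resolves, which is exactly what `AbstractMethodError` reports.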

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


> Add Spark 2.x support in Spark runner
> -------------------------------------
>
>                 Key: BEAM-1920
>                 URL: https://issues.apache.org/jira/browse/BEAM-1920
>             Project: Beam
>          Issue Type: Improvement
>          Components: runner-spark
>            Reporter: Jean-Baptiste Onofré
>            Assignee: Jean-Baptiste Onofré
>
> I have a branch working with both the Spark 1 and Spark 2 backends.
> I'm preparing a pull request about that.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
