Josh,

Having looked closer, I think you are correct: it's not a race condition. The
problem only shows up when persisting string columns; other data types look fine.
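
To make that concrete, here is a minimal sketch of the pattern we see (the
column names and values are made up for illustration, and whether such a tiny
example actually triggers it will depend on the data -- our real table is
larger):

from pyspark.sql import Row

# A DataFrame with a string column -- with our real data, show() after
# cache() fails in DictionaryEncoding$Encoder.compress with
# java.util.NoSuchElementException: key not found
str_df = sqlContext.createDataFrame(
    sc.parallelize([Row(loc_filter="UK"), Row(loc_filter="US")]))
str_df.cache()
str_df.show()

# A numeric-only DataFrame -- the same cache-then-show pattern works fine
num_df = sqlContext.createDataFrame(
    sc.parallelize([Row(cpm_required=1.5), Row(cpm_required=2.0)]))
num_df.cache()
num_df.show()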

Also, when we reverted to 1.4 the issue went away.
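
If anyone else hits this and can't revert or wait for 1.5.1, one workaround we
are considering (untested on our side, just based on the stack trace pointing
at DictionaryEncoding) is to turn off compression for the in-memory columnar
cache before caching:

# Possible workaround (untested): skip the columnar compression path,
# which is where DictionaryEncoding$Encoder.compress is failing.
# This must be set before the DataFrame is cached.
sqlContext.setConf("spark.sql.inMemoryColumnarStorage.compressed", "false")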

Thanks

On Thursday, 24 September 2015, Josh Rosen <rosenvi...@gmail.com> wrote:

> I believe that this is an instance of
> https://issues.apache.org/jira/browse/SPARK-10422, which should be fixed
> in the upcoming 1.5.1 release.
>
> On Thu, Sep 24, 2015 at 12:52 PM, Mark Hamstra <m...@clearstorydata.com> wrote:
>
>> Where do you see a race in the DAGScheduler?  On a quick look at your
>> stack trace, this just looks to me like a Job where a Stage failed and then
>> the DAGScheduler aborted the failed Job.
>>
>> On Thu, Sep 24, 2015 at 12:00 PM, robin_up <robin...@gmail.com> wrote:
>>
>>> Hi
>>>
>>> After upgrading to 1.5, we found a possible race condition in the
>>> DAGScheduler, similar to https://issues.apache.org/jira/browse/SPARK-4454.
>>>
>>> Here is the code creating the problem:
>>>
>>>
>>> from pyspark.sql import Row
>>>
>>> app_cpm_load = sc.textFile("/user/a/app_ecpm.txt") \
>>>     .map(lambda x: x.split(',')) \
>>>     .map(lambda p: Row(app_id=str(p[0]), loc_filter=str(p[1]),
>>>                        cpm_required=float(p[2])))
>>> app_cpm = sqlContext.createDataFrame(app_cpm_load)
>>> app_cpm.registerTempTable("app_cpm")
>>>
>>> app_rev_cpm_sql = '''select loc_filter from app_cpm'''
>>> app_rev_cpm = sqlContext.sql(app_rev_cpm_sql)
>>> app_rev_cpm.cache()
>>> app_rev_cpm.show()
>>>
>>>
>>>
>>> Traceback (most recent call last):
>>>   File "<stdin>", line 1, in <module>
>>>   File "/opt/spark/python/pyspark/sql/dataframe.py", line 256, in show
>>>     print(self._jdf.showString(n, truncate))
>>>   File "/opt/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__
>>>   File "/opt/spark/python/pyspark/sql/utils.py", line 36, in deco
>>>     return f(*a, **kw)
>>>   File "/opt/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
>>> py4j.protocol.Py4JJavaError: An error occurred while calling o46.showString.
>>> : org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 4 times, most recent failure: Lost task 0.3 in stage 1.0 (TID 4, spark-yarn-dn02): java.util.NoSuchElementException: key not found: UK
>>>         at scala.collection.MapLike$class.default(MapLike.scala:228)
>>>         at scala.collection.AbstractMap.default(Map.scala:58)
>>>         at scala.collection.mutable.HashMap.apply(HashMap.scala:64)
>>>         at
>>>
>>> org.apache.spark.sql.columnar.compression.DictionaryEncoding$Encoder.compress(compressionSchemes.scala:258)
>>>         at
>>>
>>> org.apache.spark.sql.columnar.compression.CompressibleColumnBuilder$class.build(CompressibleColumnBuilder.scala:110)
>>>         at
>>>
>>> org.apache.spark.sql.columnar.NativeColumnBuilder.build(ColumnBuilder.scala:87)
>>>         at
>>>
>>> org.apache.spark.sql.columnar.InMemoryRelation$$anonfun$3$$anon$1$$anonfun$next$2.apply(InMemoryColumnarTableScan.scala:152)
>>>         at
>>>
>>> org.apache.spark.sql.columnar.InMemoryRelation$$anonfun$3$$anon$1$$anonfun$next$2.apply(InMemoryColumnarTableScan.scala:152)
>>>         at
>>>
>>> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>>>         at
>>>
>>> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>>>         at
>>>
>>> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
>>>         at
>>> scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
>>>         at
>>> scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
>>>         at
>>> scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:108)
>>>         at
>>>
>>> org.apache.spark.sql.columnar.InMemoryRelation$$anonfun$3$$anon$1.next(InMemoryColumnarTableScan.scala:152)
>>>         at
>>>
>>> org.apache.spark.sql.columnar.InMemoryRelation$$anonfun$3$$anon$1.next(InMemoryColumnarTableScan.scala:120)
>>>         at
>>> org.apache.spark.storage.MemoryStore.unrollSafely(MemoryStore.scala:278)
>>>         at
>>> org.apache.spark.CacheManager.putInBlockManager(CacheManager.scala:171)
>>>         at
>>> org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:78)
>>>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:262)
>>>         at
>>> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>>>         at
>>> org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
>>>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
>>>         at
>>> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>>>         at
>>> org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
>>>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
>>>         at
>>> org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
>>>         at org.apache.spark.scheduler.Task.run(Task.scala:88)
>>>         at
>>> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
>>>         at
>>>
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>>         at
>>>
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>         at java.lang.Thread.run(Thread.java:745)
>>>
>>> Driver stacktrace:
>>>         at
>>> org.apache.spark.scheduler.DAGScheduler.org
>>> $apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1280)
>>>         at
>>>
>>> org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1268)
>>>         at
>>>
>>> org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1267)
>>>         at
>>>
>>> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>>>         at
>>> scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>>>         at
>>>
>>> org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1267)
>>>         at
>>>
>>> org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
>>>         at
>>>
>>> org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
>>>         at scala.Option.foreach(Option.scala:236)
>>>         at
>>>
>>> org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697)
>>>         at
>>>
>>> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1493)
>>>         at
>>>
>>> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1455)
>>>         at
>>>
>>> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1444)
>>>         at
>>> org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
>>>         at
>>> org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:567)
>>>         at org.apache.spark.SparkContext.runJob(SparkContext.scala:1813)
>>>         at org.apache.spark.SparkContext.runJob(SparkContext.scala:1826)
>>>         at org.apache.spark.SparkContext.runJob(SparkContext.scala:1839)
>>>         at
>>> org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:215)
>>>         at
>>>
>>> org.apache.spark.sql.execution.Limit.executeCollect(basicOperators.scala:207)
>>>         at
>>>
>>> org.apache.spark.sql.DataFrame$$anonfun$collect$1.apply(DataFrame.scala:1386)
>>>         at
>>>
>>> org.apache.spark.sql.DataFrame$$anonfun$collect$1.apply(DataFrame.scala:1386)
>>>         at
>>>
>>> org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56)
>>>         at
>>> org.apache.spark.sql.DataFrame.withNewExecutionId(DataFrame.scala:1904)
>>>         at org.apache.spark.sql.DataFrame.collect(DataFrame.scala:1385)
>>>         at org.apache.spark.sql.DataFrame.head(DataFrame.scala:1315)
>>>         at org.apache.spark.sql.DataFrame.take(DataFrame.scala:1378)
>>>         at org.apache.spark.sql.DataFrame.showString(DataFrame.scala:178)
>>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>         at
>>>
>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>         at
>>>
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>         at java.lang.reflect.Method.invoke(Method.java:606)
>>>         at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
>>>         at
>>> py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
>>>         at py4j.Gateway.invoke(Gateway.java:259)
>>>         at
>>> py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
>>>         at py4j.commands.CallCommand.execute(CallCommand.java:79)
>>>         at py4j.GatewayConnection.run(GatewayConnection.java:207)
>>>         at java.lang.Thread.run(Thread.java:745)
>>> Caused by: java.util.NoSuchElementException: key not found: UK
>>>         at scala.collection.MapLike$class.default(MapLike.scala:228)
>>>         at scala.collection.AbstractMap.default(Map.scala:58)
>>>         at scala.collection.mutable.HashMap.apply(HashMap.scala:64)
>>>         at
>>>
>>> org.apache.spark.sql.columnar.compression.DictionaryEncoding$Encoder.compress(compressionSchemes.scala:258)
>>>         at
>>>
>>> org.apache.spark.sql.columnar.compression.CompressibleColumnBuilder$class.build(CompressibleColumnBuilder.scala:110)
>>>         at
>>>
>>> org.apache.spark.sql.columnar.NativeColumnBuilder.build(ColumnBuilder.scala:87)
>>>         at
>>>
>>> org.apache.spark.sql.columnar.InMemoryRelation$$anonfun$3$$anon$1$$anonfun$next$2.apply(InMemoryColumnarTableScan.scala:152)
>>>         at
>>>
>>> org.apache.spark.sql.columnar.InMemoryRelation$$anonfun$3$$anon$1$$anonfun$next$2.apply(InMemoryColumnarTableScan.scala:152)
>>>         at
>>>
>>> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>>>         at
>>>
>>> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>>>         at
>>>
>>> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
>>>         at
>>> scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
>>>         at
>>> scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
>>>         at
>>> scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:108)
>>>         at
>>>
>>> org.apache.spark.sql.columnar.InMemoryRelation$$anonfun$3$$anon$1.next(InMemoryColumnarTableScan.scala:152)
>>>         at
>>>
>>> org.apache.spark.sql.columnar.InMemoryRelation$$anonfun$3$$anon$1.next(InMemoryColumnarTableScan.scala:120)
>>>         at
>>> org.apache.spark.storage.MemoryStore.unrollSafely(MemoryStore.scala:278)
>>>         at
>>> org.apache.spark.CacheManager.putInBlockManager(CacheManager.scala:171)
>>>         at
>>> org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:78)
>>>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:262)
>>>         at
>>> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>>>         at
>>> org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
>>>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
>>>         at
>>> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>>>         at
>>> org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
>>>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
>>>         at
>>> org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
>>>         at org.apache.spark.scheduler.Task.run(Task.scala:88)
>>>         at
>>> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
>>>         at
>>>
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>>         at
>>>
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>         ... 1 more
>>>
>>>
>>>
>>>
>>>
>>> -----
>>> -- Robin Li
>>> --
>>> View this message in context:
>>> http://apache-spark-user-list.1001560.n3.nabble.com/Potential-racing-condition-in-DAGScheduler-when-Spark-1-5-caching-tp24810.html
>>> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>>>
>>> ---------------------------------------------------------------------
>>> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
>>> For additional commands, e-mail: user-h...@spark.apache.org
>>>
>>>
>>
>

-- 
Robin
