Github user viirya commented on the issue:

    https://github.com/apache/spark/pull/15916
  
    @rxin An exception will be thrown, not just a warning message. The exception is thrown in the `Executor` after it calls `taskMemoryManager.cleanUpAllAllocatedMemory` and finds that some memory was not released when the task completed.
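    For reference, this is roughly what the check looks like (a paraphrased sketch, not the exact Spark source; the names `LeakCheckSketch`, `checkForLeaks` and the simplified traits here are stand-ins for illustration only):

        object LeakCheckSketch {
          // Stand-in for org.apache.spark.SparkException.
          class SparkException(msg: String) extends Exception(msg)

          // Stand-in for the task's TaskMemoryManager: cleanUpAllAllocatedMemory
          // force-frees everything the task still holds and returns the bytes freed.
          trait TaskMemoryManager {
            def cleanUpAllAllocatedMemory(): Long
          }

          // Called by the executor after the task body has finished. A non-zero
          // result from cleanUpAllAllocatedMemory means the task leaked managed
          // memory; with the "throw on leak" behaviour enabled (as in the test
          // suites), this surfaces as the exception shown below instead of a warning.
          def checkForLeaks(taskMemoryManager: TaskMemoryManager,
                            taskId: Long,
                            exceptionOnMemoryLeak: Boolean): Unit = {
            val freedMemory = taskMemoryManager.cleanUpAllAllocatedMemory()
            if (freedMemory > 0) {
              val errMsg = s"Managed memory leak detected; size = $freedMemory bytes, TID = $taskId"
              if (exceptionOnMemoryLeak) {
                throw new SparkException(errMsg)
              } else {
                System.err.println(s"WARN: $errMsg")
              }
            }
          }
        }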
    
    The exception looks like:
    
        [info] - SPARK-18487: Consume all elements for show/take to avoid memory leak *** FAILED *** (1 second, 73 milliseconds)
        [info]   org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 179.0 failed 1 times, most recent failure: Lost task 0.0 in stage 179.0 (TID 501, localhost, executor driver): org.apache.spark.SparkException: Managed memory leak detected; size = 33816576 bytes, TID = 501
        [info]  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:295)
        [info]  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        [info]  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        [info]  at java.lang.Thread.run(Thread.java:745)
        [info]
        [info] Driver stacktrace:
        [info]   at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1436)
        [info]   at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1424)
        [info]   at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1423)
        [info]   at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        [info]   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
        [info]   at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1423)
        [info]   at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
        [info]   at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
        [info]   at scala.Option.foreach(Option.scala:257)
        ...

