[ 
https://issues.apache.org/jira/browse/SPARK-29055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

George Papa updated SPARK-29055:
--------------------------------
    Attachment:     (was: image-2019-09-11-16-14-34-963.png)

> Memory leak in Spark Driver
> ---------------------------
>
>                 Key: SPARK-29055
>                 URL: https://issues.apache.org/jira/browse/SPARK-29055
>             Project: Spark
>          Issue Type: Bug
>          Components: Block Manager, Spark Core
>    Affects Versions: 2.3.3, 2.4.3, 2.4.4
>            Reporter: George Papa
>            Priority: Major
>
>  *ISSUE*
> I was using Spark 2.1.1 and upgraded to the latest version, 2.4.4. I 
> observed in the Spark UI that the driver memory is{color:#FF0000} increasing 
> continuously{color}, and after a long run I got the following error: 
> {color:#FF0000}java.lang.OutOfMemoryError: GC overhead limit exceeded{color}
> In Spark 2.1.1 the driver memory was extremely low, and after the runs of the 
> ContextCleaner and BlockManager the memory was decreasing.
> I also tested with Spark versions 2.3.3 and 2.4.3 and saw the same 
> behavior.
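> For reference, a rough monitoring sketch (my own illustration, not part of 
> the original application) that polls the driver entry of Spark's monitoring 
> REST API, assuming the default UI port 4040 and the {{requests}} package, to 
> log the memory growth described above:
> {code:java}
> import time
> import requests
> 
> UI = "http://localhost:4040/api/v1"
> 
> # the single running application on this driver
> app_id = requests.get(UI + "/applications").json()[0]["id"]
> 
> while True:
>     executors = requests.get(
>         UI + "/applications/{0}/executors".format(app_id)).json()
>     # the driver is reported alongside the executors with id "driver"
>     driver = next(e for e in executors if e["id"] == "driver")
>     print("driver memoryUsed={0} / maxMemory={1}".format(
>         driver["memoryUsed"], driver["maxMemory"]))
>     time.sleep(60)
> {code}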
>  
> *HOW TO REPRODUCE THIS BEHAVIOR:*
> I created a very simple application (count_file.py) in order to reproduce 
> this behavior. It reads CSV files from a directory, counts the rows of each 
> file, and then removes the processed file.
> {code:java}
> import os
> 
> from pyspark.sql import SparkSession
> 
> target_dir = "..."
> 
> spark = SparkSession.builder.appName("DataframeCount").getOrCreate()
> 
> while True:
>     for f in os.listdir(target_dir):
>         path = os.path.join(target_dir, f)
> 
>         # count the rows of the csv file
>         df = spark.read.load(path, format="csv")
>         print("Number of records: {0}".format(df.count()))
> 
>         # remove the processed file
>         os.remove(path)
>         print("File {0} removed successfully!".format(f)){code}
>  
> My spark-submit command:
>  
> {code:java}
> spark-submit \
> --master spark://xxx.xxx.xx.xxx \
> --deploy-mode client \
> --executor-memory 4g \
> --executor-cores 3 \
> --queue streaming \
> count_file.py
> {code}
>  
> *TESTED CASES WITH NO DIFFERENCE:*
>  * Tested with the default settings (spark-defaults.conf)
>  * Set spark.cleaner.periodicGC.interval to 1min (or less); see the sketch 
> after this list
>  * Set {{spark.cleaner.referenceTracking.blocking}}=false
>  * Ran the application in cluster mode
>  * Increased/decreased the resources of the executors and the driver
>  * Tested with extraJavaOptions on the driver and executors: -XX:+UseG1GC 
> -XX:InitiatingHeapOccupancyPercent=35 -XX:ConcGCThreads=12 
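> A minimal sketch (my own illustration, not the original application code) of 
> how the cleaner settings above can be applied when building the SparkSession; 
> the JVM flags were passed through spark.driver.extraJavaOptions / 
> spark.executor.extraJavaOptions at submit time instead, since in client mode 
> the driver JVM is already running when the application code executes:
> {code:java}
> from pyspark.sql import SparkSession
> 
> spark = (SparkSession.builder
>          .appName("DataframeCount")
>          # run the periodic context-cleaner GC every minute instead of the 30min default
>          .config("spark.cleaner.periodicGC.interval", "1min")
>          # make reference-tracking cleanups non-blocking on the driver
>          .config("spark.cleaner.referenceTracking.blocking", "false")
>          .getOrCreate())
> {code}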
>  
> *DEPENDENCIES*
>  * Operation system: Ubuntu 16.04.3 LTS
>  * Java: jdk1.8.0_131 (tested also with jdk1.8.0_221)
>  * Python: Python 2.7.12
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)
