[ https://issues.apache.org/jira/browse/SPARK-14261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15277789#comment-15277789 ]

Xiaochun Liang commented on SPARK-14261:
----------------------------------------

I tried the fix on Spark 1.6.1 by applying it in 
ClientWrapper.scala (sql\hive\src\main\scala\org\apache\spark\sql\hive\client\ClientWrapper.scala), 
and the test results are promising. The attached memory snapshot 
(8892_MemorySnapshot.PNG) shows the Spark Thrift Server's memory after a 
21-hour long run; memory usage drops once the queries stop. 
In the memory snapshots taken with 4g, 5g, 6g, and 6g_stop_longrunquery, 
CommandProcessorFactory no longer retains memory. The Spark Thrift Server 
runs well with the fix.
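
For reference, below is a minimal sketch of the kind of cleanup I understand the fix to perform (this is my assumption about the change exercised in ClientWrapper; the helper name runHiveCommand is illustrative and the actual patch may differ). Hive's CommandProcessorFactory caches CommandProcessor instances in a static map keyed by HiveConf, so unless the cache is cleaned after each command, the cached processors and the HiveConf they reference are never released.

    // Sketch only: assumes the fix clears CommandProcessorFactory's static
    // cache after each Hive command so the processor can be garbage collected.
    import org.apache.hadoop.hive.conf.HiveConf
    import org.apache.hadoop.hive.ql.processors.{CommandProcessor, CommandProcessorFactory}

    def runHiveCommand(cmd: String, conf: HiveConf): Unit = {
      val tokens = cmd.trim.split("""\s+""")
      // Factory caches the processor in a static map keyed by the HiveConf.
      val proc: CommandProcessor = CommandProcessorFactory.get(tokens, conf)
      try {
        proc.run(cmd)  // execute the command (simplified; error handling omitted)
      } finally {
        // Drop the cached processor so it does not accumulate across queries.
        CommandProcessorFactory.clean(conf)
      }
    }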





> Memory leak in Spark Thrift Server
> ----------------------------------
>
>                 Key: SPARK-14261
>                 URL: https://issues.apache.org/jira/browse/SPARK-14261
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.6.0
>            Reporter: Xiaochun Liang
>         Attachments: 16716_heapdump_64g.PNG, 16716_heapdump_80g.PNG, 
> MemorySnapshot.PNG
>
>
> I am running Spark Thrift Server on Windows Server 2012. The Spark Thrift 
> Server is launched in Yarn client mode. Its memory usage increases 
> gradually as queries come in. I suspect there is a memory leak in the Spark 
> Thrift Server.


