How many machines are there in your standalone cluster?

I am not using Tachyon.


GC cannot help me... Can anyone help?


my configuration:


spark.deploy.spreadOut false
spark.eventLog.enabled true
spark.executor.cores 24


spark.ui.retainedJobs 10
spark.ui.retainedStages 10
spark.history.retainedApplications 5
spark.deploy.retainedApplications 10
spark.deploy.retainedDrivers  10
spark.streaming.ui.retainedBatches 10
spark.sql.thriftserver.ui.retainedSessions 10
spark.sql.thriftserver.ui.retainedStatements 100



spark.file.transferTo false
spark.driver.maxResultSize 4g
spark.sql.hive.metastore.jars=/spark/spark-1.4.1/hive/*


spark.eventLog.dir                hdfs://mycluster/user/spark/historylog
spark.history.fs.logDirectory     hdfs://mycluster/user/spark/historylog



spark.driver.extraClassPath=/spark/spark-1.4.1/extlib/*
spark.executor.extraClassPath=/spark/spark-1.4.1/extlib/*



spark.sql.parquet.binaryAsString true
spark.serializer        org.apache.spark.serializer.KryoSerializer
spark.kryoserializer.buffer 32
spark.kryoserializer.buffer.max 256
spark.shuffle.consolidateFiles true
spark.io.compression.codec org.apache.spark.io.LZ4CompressionCodec
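
The two kryoserializer buffer values above have no unit suffix, which is ambiguous. Assuming the intended sizes were 32 KB and 256 MB (the same order of magnitude as Spark's defaults), explicit suffixes make the intent unmistakable — a sketch:

```
spark.kryoserializer.buffer      32k
spark.kryoserializer.buffer.max  256m
```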











------------------ Original Message ------------------
From: "Igor Berman" <igor.ber...@gmail.com>
Date: Aug 3, 2015 (Mon), 7:56
To: "Sea" <261810...@qq.com>
Cc: "Barak Gitsis" <bar...@similarweb.com>; "Ted Yu" <yuzhih...@gmail.com>;
"user@spark.apache.org" <user@spark.apache.org>; "rxin" <r...@databricks.com>;
"joshrosen" <joshro...@databricks.com>; "davies" <dav...@databricks.com>;
Subject: Re: About memory leak in spark 1.4.1



in general, what is your configuration? use --conf "spark.logConf=true"
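
For example, the flag can be passed on the command line or set persistently (a sketch; either form works):

```
# one-off, on the command line:
#   spark-submit --conf "spark.logConf=true" ...
# or persistently, in conf/spark-defaults.conf:
spark.logConf  true
```

With this set, the driver logs the effective configuration at startup, so you can confirm what the executors actually received.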



we have 1.4.1 in a production standalone cluster and haven't experienced what you 
are describing. Can you verify in the web UI that Spark indeed got your 50g per 
executor limit? I mean on the configuration page..


Might you be using off-heap storage (Tachyon)?




On 3 August 2015 at 04:58, Sea <261810...@qq.com> wrote:
"spark uses a lot more than heap memory, it is the expected behavior."  It 
didn't exist in spark 1.3.x.
What does "a lot more than" mean?  It means that I lose control of it!
I tried to apply for 31g, but it still grows to 55g and continues to grow!!! That is 
the point!
I have tried setting memoryFraction to 0.2, but it didn't help.
I don't know whether it will still exist in the next release, 1.5; I wish not.






------------------ Original Message ------------------
From: "Barak Gitsis" <bar...@similarweb.com>
Date: Aug 2, 2015 (Sun), 9:55
To: "Sea" <261810...@qq.com>; "Ted Yu" <yuzhih...@gmail.com>;
Cc: "user@spark.apache.org" <user@spark.apache.org>;
"rxin" <r...@databricks.com>; "joshrosen" <joshro...@databricks.com>;
"davies" <dav...@databricks.com>;
Subject: Re: About memory leak in spark 1.4.1





Spark uses a lot more than heap memory; it is the expected behavior. In 1.4, 
off-heap memory usage is supposed to grow in comparison to 1.3.


Better to use as little memory as you can for the heap, and since you are not 
utilizing it already, it is safe for you to reduce it.
memoryFraction helps you optimize heap usage for your data/application profile 
while keeping it tight.



 






On Sun, Aug 2, 2015 at 12:54 PM Sea <261810...@qq.com> wrote:

spark.storage.memoryFraction is in heap memory, but my situation is that the 
memory used is more than the heap memory!


Anyone else use spark 1.4.1 in production? 




------------------ Original Message ------------------
From: "Ted Yu" <yuzhih...@gmail.com>
Date: Aug 2, 2015 (Sun), 5:45
To: "Sea" <261810...@qq.com>;
Cc: "Barak Gitsis" <bar...@similarweb.com>;
"user@spark.apache.org" <user@spark.apache.org>; "rxin" <r...@databricks.com>;
"joshrosen" <joshro...@databricks.com>; "davies" <dav...@databricks.com>;
Subject: Re: About memory leak in spark 1.4.1




http://spark.apache.org/docs/latest/tuning.html does mention 
spark.storage.memoryFraction in two places.
One is under the Cache Size Tuning section.


FYI


On Sun, Aug 2, 2015 at 2:16 AM, Sea <261810...@qq.com> wrote:
Hi, Barak
    It is ok with spark 1.3.0; the problem is with spark 1.4.1.
    I don't think spark.storage.memoryFraction will make any difference, because it 
is still heap memory.




------------------ Original Message ------------------
From: "Barak Gitsis" <bar...@similarweb.com>
Date: Aug 2, 2015 (Sun), 4:11
To: "Sea" <261810...@qq.com>; "user" <user@spark.apache.org>;
Cc: "rxin" <r...@databricks.com>; "joshrosen" <joshro...@databricks.com>;
"davies" <dav...@databricks.com>;
Subject: Re: About memory leak in spark 1.4.1



Hi,
reducing spark.storage.memoryFraction did the trick for me. The heap doesn't get 
filled because it is reserved..
My reasoning is: 
I give the executor all the memory I can give it, so that makes it a boundary.
From there I try to make the best use of memory I can. storage.memoryFraction is, 
in a sense, user data space.  The rest can be used by the system. 
If you don't have so much data that you MUST store in memory for performance, 
better to give spark more space.. 
I ended up setting it to 0.3
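
As a config sketch of that tuning (spark.storage.memoryFraction defaults to 0.6 in this era of Spark; lowering it shrinks the cache region and leaves more heap for execution and the rest of the JVM):

```
# in conf/spark-defaults.conf
spark.storage.memoryFraction  0.3
```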


All that said, this is on spark 1.3 on our cluster.


hope that helps


On Sat, Aug 1, 2015 at 5:43 PM Sea <261810...@qq.com> wrote:

Hi, all
I upgraded spark to 1.4.1, and many applications failed... I find the heap memory is 
not full, but the CoarseGrainedExecutorBackend process takes more 
memory than I expect, and it increases as time goes on; finally it exceeds the 
max limit of the server, and the worker dies.....


Can anyone help?


Mode: standalone


spark.executor.memory 50g


25583 xiaoju    20   0 75.5g  55g  28m S 1729.3 88.1   2172:52 java


55g is more than the 50g I applied for.
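
One way to watch this growth over time is to sample the process's resident set size (RSS) directly — a sketch, where PID 25583 comes from the top output above; the snippet uses the current shell's own PID ($$) so it is self-contained to run:

```shell
#!/bin/sh
# Sample the resident set size (RSS, in KB) of a process and print it.
# $$ (this shell) is used for illustration; substitute the executor's PID,
# e.g. 25583 from the top output, to track CoarseGrainedExecutorBackend.
pid=$$
rss_kb=$(ps -o rss= -p "$pid" | tr -d ' ')
echo "PID $pid RSS: ${rss_kb} KB"
```

Running this in a loop (e.g. under `watch`) against the executor PID shows whether RSS keeps climbing past the configured heap, which distinguishes off-heap growth from ordinary heap usage.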



-- 

-Barak








-- 

-Barak
