Yes, it seems like CMS is better. I tried G1 as Databricks' blog recommended, but it was too slow.
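In case it helps others on the list, switching the executors over to CMS only requires changing the extra Java options; a minimal sketch (the app jar name and the CMS tuning flags beyond -XX:+UseConcMarkSweepGC are my assumptions, not from this thread):

```shell
# Sketch: run the same job with CMS instead of G1 on the executors.
# -XX:+CMSParallelRemarkEnabled is a common companion flag, not something
# the original poster reported using.
spark-submit \
  --conf spark.executor.memory=4G \
  --conf spark.executor.extraJavaOptions="-XX:+UseConcMarkSweepGC \
      -XX:+CMSParallelRemarkEnabled" \
  your-app.jar   # placeholder for the actual application jar
```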




------------------ Original Message ------------------
From: "condor join" <spark_ker...@outlook.com>
Date: Monday, May 30, 2016, 10:17 AM
To: "Ted Yu" <yuzhih...@gmail.com>
Cc: "user@spark.apache.org" <user@spark.apache.org>
Subject: Re: G1 GC takes too much time



 The following are the parameters:
 -XX:+UseG1GC -XX:+UnlockDiagnosticVMOptions -XX:+G1SummarizeConcMark
 -XX:InitiatingHeapOccupancyPercent=35

 spark.executor.memory=4G
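For anyone reproducing this, the executor JVM flags above are typically passed through spark.executor.extraJavaOptions; a sketch follows. Adding -XX:+PrintGCDetails and -XX:+PrintGCDateStamps to capture the Young GC / object-copy times in the executor logs is my suggestion, not part of the original configuration:

```shell
# Sketch: the G1 flags from this mail, plus GC logging so the long
# Young GC pauses and object-copy times show up in executor stdout.
spark-submit \
  --conf spark.executor.memory=4G \
  --conf spark.executor.extraJavaOptions="-XX:+UseG1GC \
      -XX:+UnlockDiagnosticVMOptions -XX:+G1SummarizeConcMark \
      -XX:InitiatingHeapOccupancyPercent=35 \
      -XX:+PrintGCDetails -XX:+PrintGCDateStamps" \
  your-app.jar   # placeholder for the actual application jar
```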
 
 
 
 
 From: Ted Yu <yuzhih...@gmail.com>
 Date: May 30, 2016, 9:47:05
 To: condor join
 Cc: user@spark.apache.org
 Subject: Re: G1 GC takes too much time
 
 bq. It happens during the Reduce majority.

 Did the above refer to a reduce operation?
 
 
 Can you share your G1GC parameters (and heap size for workers)?
 
 
 Thanks
 
 
 On Sun, May 29, 2016 at 6:15 PM, condor join <spark_ker...@outlook.com> wrote:
    Hi, my Spark application failed because it spent too much time in GC.
Looking at the logs I found these things:
 1. Young GC takes too much time, and no Full GC was observed at that point;
 2. Most of the time is spent in object copy;
 3. It happened more easily when there were not enough resources;
 4. It happens mostly during the reduce phase.
 
 
 Has anyone met the same problem?
 Thanks
 
 
 
 
 
 
 ---------------------------------------------------------------------
 To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
 For additional commands, e-mail: user-h...@spark.apache.org
