1) A long GC pause should have a detailed reason in the GC log, such as
Initial Mark/Remark, or a Full GC due to Concurrent Mode Failure/Promotion
Failure. Please check which one it is (see the grep sketch below).

2) CMS has been nice and stable in our production. Please troubleshoot case
by case; G1's behavior is just harder to reason about.

3) You should upgrade to JDK 8. For one thing, the parallel CMS initial mark
is disabled by default, or can only run single-threaded, in JDK 7 (see the
flag sketch below).
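
For point 1, a minimal sketch of how to pull the relevant events out of the
GC log (assuming -XX:+PrintGCDetails output and a hypothetical log file name
gc.log):

  grep -E "CMS-initial-mark|CMS-remark|concurrent mode failure|promotion failed|Full GC" gc.log

In my experience, the "concurrent mode failure" and "promotion failed" lines
are what usually precede the long single-threaded Full GC pauses with CMS.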
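
For point 3, on JDK 8 the CMS initial mark can run multi-threaded with
something like the flags below (whether the flag is available on your exact
build is an assumption on my part; please verify with -XX:+PrintFlagsFinal):

  -XX:+UseConcMarkSweepGC -XX:+CMSParallelInitialMarkEnabled -XX:+CMSParallelRemarkEnabled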



2017-11-29 15:14 GMT+08:00 Jörn Franke <jornfra...@gmail.com>:

> I also recommend it; you will also see performance improvements with JDK8
> in general (use the latest version).
> Keep in mind as well that more and more big data libraries will drop JDK7
> support soon (aside from the fact that JDK7 is not maintained anymore anyway).
>
> On 29. Nov 2017, at 01:31, Johannes Alberti <johan...@altiscale.com>
> wrote:
>
> Yes, I would recommend going to Java 8, giving it a shot with G1, and
> reporting back :)
>
> Sent from my iPhone
>
> On Nov 28, 2017, at 3:30 PM, Sharanya Santhanam <ssanthan....@gmail.com>
> wrote:
>
> Hi Johannes,
>
> We are running on Java version jdk1.7.0_67. We are using the Concurrent
> Mark Sweep (CMS) collector. Would you recommend using G1GC?
>
>
> These are our current settings
>
> -XX:NewRatio=8 -XX:+UseParNewGC -XX:-UseGCOverheadLimit -XX:PermSize=256m
> -Xloggc:<> -XX:HeapDumpPath=oom -XX:+PrintGCDetails -XX:+PrintGCDateStamps
> -XX:ErrorFile=<log>/oom/hs2jvmerror%p.log -XX:+UseGCLogFileRotation
> -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=128M
> -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled 
> -XX:+HeapDumpOnOutOfMemoryError
> *-XX:+UseConcMarkSweepGC* -XX:+CMSParallelRemarkEnabled
> -XX:MaxPermSize=1024m *-Xmx69427m* -Xms128m -XX:MaxHeapFreeRatio=30
> -XX:MinHeapFreeRatio=10 -XX:+UseParNewGC -XX:-UseGCOverheadLimit
> -XX:PermSize=256m
>
>
> Thanks ,
> Sharanya
>
> On Tue, Nov 28, 2017 at 2:19 PM, Johannes Alberti <johan...@altiscale.com>
> wrote:
>
>> Hi Sharanya,
>>
>> Can you share your current GC settings and Java version. Are you using
>> Java 8/9 w/ G1 already?
>>
>> Regards,
>>
>> Johannes
>>
>> Sent from my iPhone
>>
>> On Nov 28, 2017, at 12:57 PM, Sharanya Santhanam <ssanthan....@gmail.com>
>> wrote:
>>
>> Hello ,
>>
>> I am currently trying to upgrade the Hive version on our prod clusters from
>> v1.2 to v2.1.
>> We also want to adopt HS2 on the newly upgraded cluster. Earlier, all
>> queries were submitted via the Hive CLI.
>>
>> I would like to understand how large a single HS2 heap can be. And is
>> there any formula to figure out how many concurrent sessions I can
>> support with a particular heap setting?
>>
>>
>> We currently have an upper limit of 300 concurrent sessions
>> (hive.server2.thrift.max.worker.threads=300). Based on this we set the
>> max heap size to 70 GB, but are seeing many long GC pauses.
>>
>>
>> I would also like to understand the industry standard for the max HS2 heap
>> size. Are there any recommendations on which JVM GC settings work best for
>> supporting such a high number of concurrent sessions?
>>
>> Thanks,
>> Sharanya
>>
>>
>


-- 
王海华
