I see. The native map was already enabled, so I think I now have a better picture of how Accumulo uses memory. I increased the data cache to 4G and the index cache to 16G, since memory is not a problem (the machines have 128G per node... and also run other Hadoop tasks).
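For reference, a cache change like the one described above is usually made in accumulo-site.xml (a sketch, not from the thread itself; the property names are the standard tserver cache settings, and the values are the ones mentioned above — the tserver needs a restart to pick them up):

```xml
<!-- accumulo-site.xml: sketch using the sizes discussed in this thread -->
<property>
  <name>tserver.cache.data.size</name>
  <value>4G</value>
</property>
<property>
  <name>tserver.cache.index.size</name>
  <value>16G</value>
</property>
```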
Jianshi

On Thu, Jun 19, 2014 at 1:13 AM, Eric Newton <[email protected]> wrote:

> Yes... keeping MaxNewSize small reduces the time to collect the New
> Generation, which is a stop-the-world gc.
>
> 32G max jvm runtime is probably excessive if you are using the native map
> (since it doesn't take up JVM memory).
>
> Check the gc lines in your tserver debug log to see how much of the JVM
> memory you are actually using.
>
> -Eric
>
> On Wed, Jun 18, 2014 at 1:04 PM, Jianshi Huang <[email protected]> wrote:
>
>> I see. thank you Josh and Eric.
>>
>> BTW, here's my current JVM memory settings: -Xmx32g -Xms4g -XX:NewSize=2G
>> -XX:MaxNewSize=2G (Xmx < 32g for enabling CompressedOops by default)
>>
>> Is 2G good enough for MaxNewSize?
>>
>> Cheers,
>> Jianshi
>>
>> On Thu, Jun 19, 2014 at 12:54 AM, Eric Newton <[email protected]> wrote:
>>
>>> On Wed, Jun 18, 2014 at 12:51 PM, Jianshi Huang <[email protected]> wrote:
>>>
>>>> Oh, this memory size:
>>>>
>>>> tserver.memory.maps.max
>>>> 1G -> 20G (looks like this is an overkill, is it?)
>>>
>>> Probably. If you have a spare 20G, though... :-)
>>>
>>>> tserver.cache.data.size
>>>> 128M? -> 1024M
>>>>
>>>> tserver.cache.index.size
>>>> 128M? -> 1024M
>>>
>>> These will help with query, not ingest.
>>
>> --
>> Jianshi Huang
>>
>> LinkedIn: jianshi
>> Twitter: @jshuang
>> Github & Blog: http://huangjs.github.com/

--
Jianshi Huang

LinkedIn: jianshi
Twitter: @jshuang
Github & Blog: http://huangjs.github.com/
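For context, JVM options like the ones quoted above are typically set on the tserver via accumulo-env.sh (a sketch, assuming the Accumulo 1.x env-script convention of an ACCUMULO_TSERVER_OPTS variable; the exact sizes here are illustrative, not a recommendation from the thread):

```shell
# accumulo-env.sh: sketch of the JVM settings discussed in this thread.
# Keeping -Xmx below 32G lets the JVM enable CompressedOops by default,
# and a small fixed New Generation shortens stop-the-world minor GCs.
export ACCUMULO_TSERVER_OPTS="-Xmx31g -Xms4g -XX:NewSize=2G -XX:MaxNewSize=2G"
```

Note that, per the discussion above, a large -Xmx is likely unnecessary when the native map is enabled, since the in-memory map then lives outside the JVM heap.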
