Woo, 128G. That's awesome.

This thread reminded me that I wanted to write up a blog post on how to configure memory for Accumulo. It would be great if you could share any findings you come across in practice.

On 6/18/14, 10:25 AM, Jianshi Huang wrote:
Just want to correct that: -Xmx32g won't enable compressed pointers. Set
it to -Xmx31g instead.
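
For reference, a quick way to verify this on a given JVM is to dump the
final flag values and look at UseCompressedOops (a standard HotSpot
option; the grep is just a convenience):

  $ java -Xmx31g -XX:+PrintFlagsFinal -version | grep UseCompressedOops
  $ java -Xmx32g -XX:+PrintFlagsFinal -version | grep UseCompressedOops

On the 64-bit HotSpot JVMs I've checked, the first reports true and the
second false, since a 32g heap no longer fits in the compressed-oops
addressable range at the default object alignment.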

Jianshi


On Thu, Jun 19, 2014 at 1:20 AM, Jianshi Huang <[email protected]> wrote:

    I see. The native map was enabled already.

    I think I understand better now how Accumulo uses my memory. So I
    increased the data cache to 4G and the index cache to 16G, since memory
    is not a problem (each machine has 128G and also runs other Hadoop
    tasks).
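
    In case it's useful to others, the change was just those two cache
    properties. A sketch of setting them from the Accumulo shell (values
    as above; the prompt is a placeholder, and a tserver restart may
    still be needed for the new sizes to take effect):

      $ accumulo shell -u root
      root@myinstance> config -s tserver.cache.data.size=4G
      root@myinstance> config -s tserver.cache.index.size=16G

    The same values could equally be set in accumulo-site.xml.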

    Jianshi



    On Thu, Jun 19, 2014 at 1:13 AM, Eric Newton <[email protected]> wrote:

        Yes... keeping MaxNewSize small reduces the time to collect the
        New Generation, which is a stop-the-world gc.

        A 32G max JVM heap is probably excessive if you are using the
        native map (since it doesn't take up JVM memory).

        Check the gc lines in your tserver debug log to see how much of
        the JVM memory you are actually using.
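
        Something like this pulls them out (the path is a placeholder and
        the grep pattern may need tweaking for your log layout):

          $ grep ' gc ' /var/log/accumulo/tserver_myhost.debug.log | tail -20

        Those lines show how much of the JVM memory the tserver is
        actually using, which is the number to look at before raising -Xmx.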

        -Eric


        On Wed, Jun 18, 2014 at 1:04 PM, Jianshi Huang <[email protected]> wrote:

            I see. Thank you, Josh and Eric.

            BTW, here are my current JVM memory settings: -Xmx32g -Xms4g
            -XX:NewSize=2G -XX:MaxNewSize=2G (Xmx < 32g so CompressedOops
            is enabled by default)
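
            For context, on a stock install these options go into
            accumulo-env.sh; a sketch of the relevant line (variable name
            as in the 1.x example config, adjust for your layout):

              export ACCUMULO_TSERVER_OPTS="-Xmx32g -Xms4g -XX:NewSize=2G -XX:MaxNewSize=2G"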

            Is 2G good enough for MaxNewSize?


            Cheers,
            Jianshi


            On Thu, Jun 19, 2014 at 12:54 AM, Eric Newton <[email protected]> wrote:


                On Wed, Jun 18, 2014 at 12:51 PM, Jianshi Huang <[email protected]> wrote:

                    Oh, this memory size:

                    tserver.memory.maps.max
                    1G -> 20G (looks like this is overkill, isn't it?)


                Probably.  If you have a spare 20G, though... :-)


                    tserver.cache.data.size
                    128M? -> 1024M

                    tserver.cache.index.size
                    128M? -> 1024M


                These will help with query, not ingest.
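
                If you want to sanity-check what the tservers are running
                with, the shell can also filter the live config (the -f
                filter option is from the 1.6-era shell; the prompt is a
                placeholder):

                  root@myinstance> config -f tserver.memory.maps
                  root@myinstance> config -f tserver.cache

                That should list tserver.memory.maps.max, whether the
                native maps are enabled, and the two cache sizes, along
                with whether each value is a default or an override.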





--
Jianshi Huang

LinkedIn: jianshi
Twitter: @jshuang
Github & Blog: http://huangjs.github.com/
