Thanks.

I went back to HBase 0.89 with LZO 0.1, which works fine and does not show this 
issue.

I tried with a newer HBase and LZO version, also with the MALLOC_ARENA_MAX setting 
but without max direct memory set, so I was wondering whether you need a 
combination of the two to fix things (apparently not).

Now I am wondering whether I did something wrong setting the env var. It should 
just be picked up when it's in hbase-env.sh, right?
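
Concretely, the kind of setup I mean, as a minimal sketch (the exact values are 
illustrative, not a verified fix; the /proc check is just one way to see whether 
the variable actually reaches the region server JVM):

# conf/hbase-env.sh
export MALLOC_ARENA_MAX=2
export HBASE_OPTS="$HBASE_OPTS -XX:MaxDirectMemorySize=256m"

# verify the running JVM picked it up (replace <pid> with the region server pid)
cat /proc/<pid>/environ | tr '\0' '\n' | grep MALLOC_ARENA_MAX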


Friso



On 12 jan 2011, at 10:59, Andrey Stepachev wrote:

> with MALLOC_ARENA_MAX=2
> 
> I checked -XX:MaxDirectMemorySize=256m before, but it doesn't affect anything
> (not even OOM exceptions or the like).
> 
> But it looks like I have exactly the same issue. I have many 64MB anon memory
> blocks (sometimes they are 132MB), and under heavy load the RSS of the JVM
> process grows rapidly.
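> 
> One way to see these blocks, as a sketch (assumes Linux procps pmap; <pid> is a
> placeholder for the JVM process id):
> 
> # list the largest mappings; the ~64MB anonymous regions stand out near the top
> pmap -x <pid> | sort -k2 -n -r | head -20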
> 
> 2011/1/12 Friso van Vollenhoven <[email protected]>
> 
>> Just to clarify: you fixed it by setting MALLOC_ARENA_MAX=? in
>> hbase-env.sh?
>> 
>> Did you also use the -XX:MaxDirectMemorySize=256m ?
>> 
>> It would be nice to check whether this is different from the leakage with
>> LZO...
>> 
>> 
>> Thanks,
>> Friso
>> 
>> 
>> On 12 jan 2011, at 07:46, Andrey Stepachev wrote:
>> 
>>> My bad. All things work now. Thanks to Todd Lipcon :)
>>> 
>>> 2011/1/11 Andrey Stepachev <[email protected]>
>>> 
>>>> I tried to set MALLOC_ARENA_MAX=2, but I still get the same issue as in the
>>>> LZO problem thread. All those 65M blocks are here, and the JVM continues to
>>>> eat memory under heavy write load. And yes, I use the "improved" kernel
>>>> Linux 2.6.34.7-0.5.
>>>> 
>>>> 2011/1/11 Xavier Stevens <[email protected]>
>>>> 
>>>>> Are you using a newer Linux kernel with the new and "improved" memory
>>>>> allocator?
>>>>> 
>>>>> If so, try setting this in hadoop-env.sh:
>>>>> 
>>>>> export MALLOC_ARENA_MAX=<number of cores you want to use>
>>>>> 
>>>>> Maybe start by setting it to 4. You can thank Todd Lipcon if this works
>>>>> for you.
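>>>>> 
>>>>> For example, a minimal sketch (the value 4 is just the starting point
>>>>> suggested above; as I understand it, MALLOC_ARENA_MAX caps how many
>>>>> per-thread malloc arenas glibc creates, and each arena reserves memory in
>>>>> ~64MB chunks):
>>>>> 
>>>>> # hadoop-env.sh (or hbase-env.sh, if that is where your region servers read it)
>>>>> export MALLOC_ARENA_MAX=4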
>>>>> 
>>>>> Cheers,
>>>>> 
>>>>> 
>>>>> -Xavier
>>>>> 
>>>>> On 1/11/11 7:24 AM, Andrey Stepachev wrote:
>>>>>> No, I don't use LZO. I even tried removing any native support (i.e. all
>>>>>> .so files from the class path) and using Java gzip. But nothing changed.
>>>>>> 
>>>>>> 
>>>>>> 2011/1/11 Friso van Vollenhoven <[email protected]>
>>>>>> 
>>>>>>> Are you using LZO by any chance? If so, which version?
>>>>>>> 
>>>>>>> Friso
>>>>>>> 
>>>>>>> 
>>>>>>> On 11 jan 2011, at 15:57, Andrey Stepachev wrote:
>>>>>>> 
>>>>>>>> After starting HBase on JRockit, I found the same memory leakage.
>>>>>>>> 
>>>>>>>> After the launch
>>>>>>>> 
>>>>>>>> Every 2,0s: date && ps --sort=-rss -eo pid,rss,vsz,pcpu | head        Tue Jan 11 16:49:31 2011
>>>>>>>> 
>>>>>>>> Tue Jan 11 16:49:31 MSK 2011
>>>>>>>>   PID     RSS     VSZ %CPU
>>>>>>>>  7863 2547760 5576744 78.7
>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>>>> JR dumps:
>>>>>>>> 
>>>>>>>> Total mapped                  5576740KB (reserved=2676404KB)
>>>>>>>> -              Java heap      2048000KB (reserved=1472176KB)
>>>>>>>> -              GC tables        68512KB
>>>>>>>> -          Thread stacks        37236KB (#threads=111)
>>>>>>>> -          Compiled code      1048576KB (used=2599KB)
>>>>>>>> -               Internal         1224KB
>>>>>>>> -                     OS       549688KB
>>>>>>>> -                  Other      1800976KB
>>>>>>>> -            Classblocks         1280KB (malloced=1110KB #3285)
>>>>>>>> -        Java class data        20224KB (malloced=20002KB #15134 in 3285 classes)
>>>>>>>> - Native memory tracking         1024KB (malloced=325KB +10KB #20)
>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>>>> After running the MR job which creates a high write load (~1 hour):
>>>>>>>> 
>>>>>>>> Every 2,0s: date && ps --sort=-rss -eo pid,rss,vsz,pcpu | head        Tue Jan 11 17:08:56 2011
>>>>>>>> 
>>>>>>>> Tue Jan 11 17:08:56 MSK 2011
>>>>>>>>   PID     RSS     VSZ %CPU
>>>>>>>>  7863 4072396 5459572  100
>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>>>> JR said (not all of it is important; below I specify why):
>>>>>>>> 
>>>>>>>> http://paste.ubuntu.com/552820/
>>>>>>>> 
>>>>>>>> 
>>>>>>>> 7863:
>>>>>>>> Total mapped                  5742628KB +165888KB (reserved=1144000KB -1532404KB)
>>>>>>>> -              Java heap      2048000KB           (reserved=0KB -1472176KB)
>>>>>>>> -              GC tables        68512KB
>>>>>>>> -          Thread stacks        38028KB    +792KB (#threads=114 +3)
>>>>>>>> -          Compiled code      1048576KB           (used=3376KB +776KB)
>>>>>>>> -               Internal         1480KB    +256KB
>>>>>>>> -                     OS       517944KB  -31744KB
>>>>>>>> -                  Other      1996792KB +195816KB
>>>>>>>> -            Classblocks         1280KB           (malloced=1156KB +45KB #3421 +136)
>>>>>>>> -        Java class data        20992KB    +768KB (malloced=20843KB +840KB #15774 +640 in 3421 classes)
>>>>>>>> - Native memory tracking         1024KB           (malloced=325KB +10KB #20)
>>>>>>>> 
>>>>>>>> 
>>>>>>> 
>>>>> 
>>>>>>>> +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>>>>>>>>   OS                        *java    r x  0x0000000000400000 (     76KB)
>>>>>>>>   OS                        *java    rw   0x0000000000612000 (      4KB)
>>>>>>>>   OS                      *[heap]    rw   0x0000000000613000 ( 478712KB)
>>>>>>>>  INT                         Poll    r    0x000000007fffe000 (      4KB)
>>>>>>>>  INT                       Membar    rw   0x000000007ffff000 (      4KB)
>>>>>>>>  MSP            Classblocks (1/2)    rw   0x0000000082ec0000 (    768KB)
>>>>>>>>  MSP            Classblocks (2/2)    rw   0x0000000082f80000 (    512KB)
>>>>>>>> HEAP                    Java heap    rw   0x0000000083000000 (2048000KB)
>>>>>>>>                                      rw   0x00007f2574000000 (  65500KB)
>>>>>>>>                                           0x00007f2577ff7000 (     36KB)
>>>>>>>>                                      rw   0x00007f2584000000 (  65492KB)
>>>>>>>>                                           0x00007f2587ff5000 (     44KB)
>>>>>>>>                                      rw   0x00007f258c000000 (  65500KB)
>>>>>>>>                                           0x00007f258fff7000 (     36KB)
>>>>>>>>                                      rw   0x00007f2590000000 (  65500KB)
>>>>>>>>                                           0x00007f2593ff7000 (     36KB)
>>>>>>>>                                      rw   0x00007f2594000000 (  65500KB)
>>>>>>>>                                           0x00007f2597ff7000 (     36KB)
>>>>>>>>                                      rw   0x00007f2598000000 ( 131036KB)
>>>>>>>>                                           0x00007f259fff7000 (     36KB)
>>>>>>>>                                      rw   0x00007f25a0000000 (  65528KB)
>>>>>>>>                                           0x00007f25a3ffe000 (      8KB)
>>>>>>>>                                      rw   0x00007f25a4000000 (  65496KB)
>>>>>>>>                                           0x00007f25a7ff6000 (     40KB)
>>>>>>>>                                      rw   0x00007f25a8000000 (  65496KB)
>>>>>>>>                                           0x00007f25abff6000 (     40KB)
>>>>>>>>                                      rw   0x00007f25ac000000 (  65504KB)
>>>>>>>> 
>>>>>>>> 
>>>>>>>> So, the difference was in the pieces of memory like this:
>>>>>>>> 
>>>>>>>> rw 0x00007f2590000000 (65500KB)
>>>>>>>>   0x00007f2593ff7000 (36KB)
>>>>>>>> 
>>>>>>>> 
>>>>>>>> Looks like HLog allocates this memory (it looks like HLog because the size
>>>>>>>> is very similar).
>>>>>>>> 
>>>>>>>> If we count these blocks, we get the amount of lost memory:
>>>>>>>> 
>>>>>>>> 65M * 32 + 132M = 2212M
>>>>>>>> 
>>>>>>>> So, it looks like HLog allocates too much memory, and the question is: how
>>>>>>>> to restrict it?
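>>>>>>>> 
>>>>>>>> A rough way to do the same count from the live process, as a sketch
>>>>>>>> (assumes Linux procps pmap, that it labels anonymous mappings "anon" and
>>>>>>>> prints Kbytes in the second column; the 60000-140000 window is an arbitrary
>>>>>>>> cut-off to catch the ~64MB and ~132MB blocks while skipping the 2GB heap):
>>>>>>>> 
>>>>>>>> pmap -x 7863 | grep anon | awk '$2 > 60000 && $2 < 140000 { n++; s += $2 } END { print n " blocks, " s/1024 " MB" }'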
>>>>>>>> 
>>>>>>>> 2010/12/30 Andrey Stepachev <[email protected]>
>>>>>>>> 
>>>>>>>>> Hi All.
>>>>>>>>> 
>>>>>>>>> After heavy load into HBase (single node, non-distributed test system) I
>>>>>>>>> got a 4GB process size for my HBase Java process. On a 6GB machine there
>>>>>>>>> was no room for anything else (disk cache and so on).
>>>>>>>>> Does anybody know what is going on, and how do you solve this? What heap
>>>>>>>>> size is set on your hosts, and how much RSS does the HBase process
>>>>>>>>> actually use?
>>>>>>>>> 
>>>>>>>>> I haven't seen such things before; Tomcat and other Java apps don't eat
>>>>>>>>> significantly more memory than -Xmx.
>>>>>>>>> 
>>>>>>>>> Connection name: pid: 23476 org.apache.hadoop.hbase.master.HMaster start
>>>>>>>>> Virtual Machine: Java HotSpot(TM) 64-Bit Server VM version 17.1-b03
>>>>>>>>> Vendor: Sun Microsystems Inc.
>>>>>>>>> Name: 23...@mars
>>>>>>>>> Uptime: 12 hours 4 minutes
>>>>>>>>> Process CPU time: 5 hours 45 minutes
>>>>>>>>> JIT compiler: HotSpot 64-Bit Server Compiler
>>>>>>>>> Total compile time: 19,223 seconds
>>>>>>>>> ------------------------------
>>>>>>>>> Current heap size: 703 903 kbytes
>>>>>>>>> Maximum heap size: 2 030 976 kbytes
>>>>>>>>> Committed memory: 2 030 976 kbytes
>>>>>>>>> Pending finalization: 0 objects
>>>>>>>>> Garbage collector: Name = 'ParNew', Collections = 9 990, Total time spent = 5 minutes
>>>>>>>>> Garbage collector: Name = 'ConcurrentMarkSweep', Collections = 20, Total time spent = 35,754 seconds
>>>>>>>>> ------------------------------
>>>>>>>>> Operating System: Linux 2.6.34.7-0.5-xen
>>>>>>>>> Architecture: amd64
>>>>>>>>> Number of processors: 8
>>>>>>>>> Committed virtual memory: 4 403 512 kbytes
>>>>>>>>> Total physical memory: 6 815 744 kbytes
>>>>>>>>> Free physical memory: 82 720 kbytes
>>>>>>>>> Total swap space: 8 393 924 kbytes
>>>>>>>>> Free swap space: 8 050 880 kbytes
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> 
>>>>>>> 
>>>>> 
>>>> 
>>>> 
>> 
>> 
