On Thu, 22 Mar 2012, Andrew Vagin wrote:

> Which kernel do you use?
042stab049.6

> Could you show content of /proc/bc/CTID/resources?

 Well, yes and no: I can't do anything on the HN right now,
but I can show you cat /proc/user_beancounters
(this is from inside the guest that caused all that resource shortage):
Version: 2.5
       uid  resource          held   maxheld              barrier                limit  failcnt
   178032:  kmemsize      34481819  36954112          17179869184          19327352832        0
            lockedpages          0         0  9223372036854775807  9223372036854775807        0
            privvmpages    3360282   5017649              4194304              4718592        0
            shmpages          5451      5467  9223372036854775807  9223372036854775807        0
            dummy                0         0                    0                    0        0
            numproc            505       693  9223372036854775807  9223372036854775807        0
            physpages       766009    942952                    0              3932160        0
            vmguarpages          0         0  9223372036854775807  9223372036854775807        0
            oomguarpages    620184    772457  9223372036854775807  9223372036854775807        0
            numtcpsock         120       214  9223372036854775807  9223372036854775807        0
            numflock            10        13  9223372036854775807  9223372036854775807        0
            numpty              12        12  9223372036854775807  9223372036854775807        0
            numsiginfo           0        45  9223372036854775807  9223372036854775807        0
            tcpsndbuf      2498464   3850016          17179869184          19327352832        0
            tcprcvbuf      2499360   6331520          17179869184          19327352832        0
            othersockbuf      9344     35296  9223372036854775807  9223372036854775807        0
            dgramrcvbuf          0      8768  9223372036854775807  9223372036854775807        0
            numothersock        72        76  9223372036854775807  9223372036854775807        0
            dcachesize    10880246  11060512          17179869184          19327352832        0
            numfile           1488      1660  9223372036854775807  9223372036854775807        0
            dummy                0         0                    0                    0        0
            dummy                0         0                    0                    0        0
            dummy                0         0                    0                    0        0
            numiptent           10        10  9223372036854775807  9223372036854775807        0

(As you can see, I tried limiting some of the resources. When the problem
was first noticed, everything was unlimited:unlimited,
except physpages, which was at 16G, and swap, which was set to 0.)
 When HornetQ is stopped inside the container, the HN comes back to life,
but until then even things like bash completion fail with:
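
For what it's worth, a counter that has actually hit its limit shows up as a
nonzero failcnt in the last column, so zero failcnt everywhere above means no
UBC limit was tripped. A quick sketch of scanning for failures (the helper
name and sample rows below are illustrative, not taken from a real container):

```shell
#!/bin/sh
# ubc_failures: print "resource failcnt" for every beancounter row whose
# failcnt (always the last column) is nonzero. The first data row of
# /proc/user_beancounters carries the uid in column 1, so the resource
# name is column 2 there and column 1 on all following rows.
ubc_failures() {
  awk '$1 != "Version:" && NF >= 6 {
         fail = $NF
         res  = (NF == 7) ? $2 : $1
         if (res != "resource" && fail + 0 > 0) print res, fail
       }' "$@"
}

# Usage on a live system would be: ubc_failures /proc/user_beancounters
# Here we feed a small sample mimicking the layout (values made up):
printf '%s\n' \
  'Version: 2.5' \
  '  uid resource held maxheld barrier limit failcnt' \
  '  178032: kmemsize 34481819 36954112 17179869184 19327352832 0' \
  '  privvmpages 3360282 5017649 4194304 4718592 12' \
  | ubc_failures
```

This prints `privvmpages 12` for the sample, and nothing at all when every
failcnt is zero, as in the dump above.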

xmalloc: ../bash/make_cmd.c:100: cannot allocate 519 bytes (2076672 bytes 
allocated)

regards, Eyck
-- 
Key fingerprint = 40D0 9FFB 9939 7320 8294  05E0 BCC7 02C4 75CC 50D9
 Total Existance Failure
_______________________________________________
Users mailing list
[email protected]
https://openvz.org/mailman/listinfo/users
