Hi!

Excuse me for asking something somewhat off-topic: we are running some Xen PVMs
on SLES11 SP4, and from time to time I notice what looks like a 64-bit overflow
in VBD_RSECT. I think I raised this with support once before, without success
(my memory may be wrong). I'd simply like to ask whether anybody else sees the
same problem, or knows its cause.

xentop - 10:32:07   Xen 4.4.4_02-32.1
9 domains: 2 running, 7 blocked, 0 paused, 0 crashed, 0 dying, 0 shutdown
Mem: 134207288k total, 107751836k used, 26455452k free    CPUs: 24 @ 2665MHz
      NAME  STATE CPU(sec) CPU(%)    MEM(k) MEM(%) MAXMEM(k) MAXMEM(%) VCPUS NETS   NETTX(k)   NETRX(k) VBDS VBD_OO    VBD_RD    VBD_WR            VBD_RSECT  VBD_WSECT SSID
  Domain-0 -----r   441702   20.3   4059648    3.0   8388608       6.3    24    0          0          0    0      0         0         0                    0          0    0
       v01 --b---     5779   12.5  16777216   12.5  16777216      12.5     3    3     183204     441060   10      0   1343768    866205             27299164    9512070    0
       v02 --b---   426057    5.0  16777216   12.5  16777216      12.5     2    3 1427421847 2596833032   12      0 180345597  94998020 18446744072887605425 2752684148    0
       v03 --b---     5563    5.0  16777216   12.5  16777216      12.5     2    3     172219     105166   10      0    142352    903116              2811633    9184274    0
       v04 --b---      507    0.2   1048576    0.8   4194304       3.1     2    3     171194     471196    2      0     38577     47512               668060    1022688    0
       v05 --b---     9573    5.0  16777216   12.5  16777216      12.5     2    3  189830790  357721748    8      0   1871939   1485228            395244125   36548370    0
       v07 --b---     9533    8.6    524288    0.4   1048576       0.8     2    3     186967     402317    2      0    101071   1491721              1631180   23201114    0
       v08 -----r   267882  159.8  16777216   12.5  16777216      12.5     2    3  872784729 2056988805   20      0  27444416  37396958 18446744069588249289  509859372    0
       v09 --b---      183    0.1  16777216   12.5  16777216      12.5     2    3     121525       7206   12      0      1560     13181               132880     311216    0
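
Out of curiosity I subtracted the two odd VBD_RSECT values from 2^64: for v02
that gives 821946191, and for v08 4121302327, i.e. perfectly plausible sector
counts. So my guess is that the counter somehow goes negative and is then
printed as an unsigned 64-bit integer. A few lines of Python that illustrate
the arithmetic (just my interpretation, not anything from the xentop source):

    # reinterpret the two odd VBD_RSECT values as signed 64-bit integers
    for raw in (18446744072887605425, 18446744069588249289):  # v02, v08
        signed = raw - 2**64 if raw >= 2**63 else raw
        print("%20d -> %d" % (raw, signed))
    # 18446744072887605425 -> -821946191
    # 18446744069588249289 -> -4121302327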

When monitoring disk performance for cluster planning, it would be good to know
how much I/O each VM is doing. However, with numbers like these I can hardly
tell...
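
In case it helps, this is roughly how I would sample the counters anyway: take
two snapshots and do the subtraction modulo 2^64, so a counter that genuinely
wraps still yields the true increment instead of an astronomical rate. A rough
Python 3 sketch; the column indices for VBD_RSECT/VBD_WSECT (fields 16 and 17
of the 19 columns above) and the batch invocation "xentop -b -i 1" are
assumptions, so please check them against your version:

    import subprocess, time

    MASK = (1 << 64) - 1  # counter deltas taken modulo 2**64

    def sample():
        """One 'xentop -b' snapshot -> {domain: (VBD_RSECT, VBD_WSECT)}."""
        out = subprocess.check_output(["xentop", "-b", "-i", "1"], text=True)
        stats = {}
        for line in out.splitlines():
            f = line.split()
            if len(f) == 19 and f[0] != "NAME":  # skip header/short lines
                try:
                    stats[f[0]] = (int(f[16]), int(f[17]))
                except ValueError:
                    pass
        return stats

    before = sample()
    time.sleep(10)
    after = sample()
    for dom, (r1, w1) in sorted(after.items()):
        r0, w0 = before.get(dom, (r1, w1))
        # (r1 - r0) & MASK survives a modular wrap of the counter
        print("%-10s read %10d, wrote %10d sectors in 10 s"
              % (dom, (r1 - r0) & MASK, (w1 - w0) & MASK))

Of course that only hides the symptom; if the counter really goes negative
somewhere in the stack, the masking merely keeps the rates sane.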

Regards,
Ulrich



