Disk IO on one of our hardware nodes is unreasonably high according to
iostat and iotop:

This is the output of "iostat -xzN 30":

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
vg0-root          0.00     0.00    0.00    0.43     0.00     3.47     8.00     0.01   26.00  18.92   0.82
vg0-vz            0.00     0.00    0.70  300.50    39.47  2604.53     8.78    42.93  142.51   3.17  95.47
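
Reading the vg0-vz line itself: ~300 writes/s at avgrq-sz 8.78 sectors
(~4.5 KB per request), avgqu-sz ~43 and await ~142 ms, i.e. a lot of
small queued writes, which is enough to saturate %util even at only
~1.3 MB/s. As a cross-check against iotop, per-process IO can be
sampled over time with pidstat (a sketch, assuming the sysstat package
that ships iostat also provides pidstat):

# sample per-process disk reads/writes every 5 seconds, 12 times
pidstat -d 5 12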

Here is the output from iotop:

Total DISK READ: 39.04 K/s | Total DISK WRITE: 1206.61 K/s
  TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND
 1717 be/3 root        0.00 B/s  205.55 K/s  0.00 % 51.06 % [jbd2/ploop30888]
 2736 be/4 root        0.00 B/s   51.39 K/s  0.00 % 10.68 % syslogd -m 0
597591 be/4 566        13.54 K/s    0.00 B/s  0.00 %  2.83 % authProg
 2015 be/3 root        0.00 B/s   12.35 K/s  0.00 %  1.11 % [jbd2/ploop45556]
 3446 be/3 root        0.00 B/s    0.00 B/s  0.00 %  0.71 % [jbd2/ploop24055]
  610 be/3 root        0.00 B/s    0.00 B/s  0.00 %  0.38 % [jbd2/dm-0-8]
639011 be/4 47          0.00 B/s  407.91 B/s  0.00 %  0.00 % exim -bd -q60m
 2343 be/4 root        0.00 B/s 1631.66 B/s  0.00 %  0.00 % [flush-182:72889]
567695 be/4 nobody      0.00 B/s  815.83 B/s  0.00 %  0.00 % httpd -k start -DSSL
563274 be/4 nobody      0.00 B/s  815.83 B/s  0.00 %  0.00 % httpd -k start -DSSL
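
Point-in-time iotop snapshots can miss bursty writers, so an
accumulated batch run over a couple of minutes may be more
representative (a sketch using standard iotop options):

# batch mode (-b), only tasks actually doing IO (-o),
# accumulated totals (-a), 5 s interval, 24 iterations
iotop -b -o -a -d 5 -n 24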

The hardware node is a rather beefy Intel(R) Core(TM) i7 CPU X 990 @
3.47GHz with 24 GB RAM and 4 SATA drives in (software) RAID 10.

The software RAID doesn't show any unusual activity:
[root@server16 ~]# cat /proc/mdstat
Personalities : [raid1] [raid10]
md0 : active raid1 sda1[0] sdd1[4] sdb1[1] sdc1[5]
      524276 blocks super 1.0 [4/4] [UUUU]

md1 : active raid10 sda2[0] sdd2[4] sdb2[1] sdc2[5]
      2929223680 blocks super 1.0 512K chunks 2 near-copies [4/4] [UUUU]

unused devices: <none>
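
A background check or resync would normally show up in /proc/mdstat,
but the arrays' sync state can also be confirmed directly through
sysfs (a sketch; these md attributes are standard on current kernels):

# "idle" means no check, resync or rebuild is in progress
cat /sys/block/md0/md/sync_action
cat /sys/block/md1/md/sync_action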

The server only has 3 containers. Two of them are fairly quiet; the
busiest is a cPanel server with 170 websites. The load average is
typically around 1.xx, CPU use is minimal, and the same goes for
memory use.

top - 07:14:25 up 9 days,  2:10,  3 users,  load average: 3.43, 3.02, 3.02
Tasks: 553 total,   4 running, 542 sleeping,   0 stopped,   7 zombie
Cpu(s):  6.6%us,  0.8%sy,  0.2%ni, 88.8%id,  3.5%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  24544856k total, 23808120k used,   736736k free,  1840000k buffers
Swap:  6291448k total,   336684k used,  5954764k free, 18275304k cached


Per-container load averages:

      CTID       LAVERAGE
      1602 1.90/1.85/1.83
      1603 0.52/0.28/0.19
      1609 0.00/0.01/0.01
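
To attribute the IO per container rather than per process, the
beancounter IO accounting can be read directly, assuming the OpenVZ
kernel exposes /proc/bc/<CTID>/ioacct (a sketch; that path is an
assumption worth verifying on your kernel):

# cumulative bytes read/written by each container
for ct in 1602 1603 1609; do
    echo "CT $ct:"
    grep -E '^[[:space:]]*(read|write)[[:space:]]' /proc/bc/$ct/ioacct
done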

So the questions are:

1) Why does iostat show such a high %util when iotop doesn't show any
particular load?
2) I note that the top entries in iotop are [jbd2/ploopXXXX] kernel
threads - what actually happens in there? I presume this is the
internal IO activity of the containers, but why is it shown as the
ploop device's IO and not as the IO of the individual container
processes? (See the mapping sketch after this list.)
3) Can this be improved somehow?
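
For reference on question 2: jbd2/<dev> threads are the kernel's ext4
journal commit threads, one per mounted filesystem, so each
[jbd2/ploopXXXXX] belongs to the ext4 filesystem inside one ploop
image. A device can be mapped back to its container through its image
path (a sketch, assuming the usual ploop sysfs layout with
pdelta/0/image; adjust if your ploop version differs):

# print each ploop device together with its base delta image;
# the image path contains the container's private area
for d in /sys/block/ploop*; do
    echo "${d##*/} -> $(cat "$d"/pdelta/0/image)"
done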