Found something interesting when running iostat. Here is the output:

iostat 2
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.78    0.00    0.75   82.98    0.00   15.49

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda              92.00         0.00      9822.00          0      19644
sdb              88.50         0.00      8620.00          0      17240
sdc              46.00         0.00       338.00          0        676
md0               0.00         0.00         0.00          0          0
md1               0.00         0.00         0.00          0          0
md2               0.00         0.00         0.00          0          0

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.53    0.00    0.63   77.87    0.00   20.97

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda              28.50         0.00      3636.00          0       7272
sdb              31.50         0.00      4838.00          0       9676
sdc              21.50         0.00       424.00          0        848
md0               0.00         0.00         0.00          0          0
md1               0.00         0.00         0.00          0          0
md2               0.00         0.00         0.00          0          0

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.90    0.00    1.10   79.17    0.00   18.84

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda              96.00         0.00     20164.00          0      40328
sdb              91.50         0.00     18014.00          0      36028
sdc              49.00         0.00      7512.00          0      15024
md0               0.00         0.00         0.00          0          0
md1               0.00         0.00         0.00          0          0
md2               0.00         0.00         0.00          0          0
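
(For a per-partition breakdown, iostat from sysstat also has a -p option, e.g. "iostat -p ALL 2"; the figures above are the whole-disk view.)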


cat /proc/mdstat 
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] 
[raid10] 
md2 : active raid5 sda6[2] sdc6[0] sdb6[1]
      1757813760 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
      
md1 : active raid1 sda5[2] sdb5[1] sdc5[0]
      78318520 blocks super 1.2 [3/3] [UUU]
      
md0 : active raid1 sdc1[0] sda1[2] sdb1[1]
      19529656 blocks super 1.2 [3/3] [UUU]


I checked /proc/diskstats, and of the partitions only the sd*5 ones are being
written to. Correct me if I'm wrong, but this must mean something is writing
to the sd*5 partitions on all three disks without going through the md1 device.
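
For anyone who wants to reproduce the check, sampling /proc/diskstats twice and
diffing the sectors-written column (field 10) is enough; a rough sketch (the
10-second interval and temp file names are arbitrary):

awk '{print $3, $10}' /proc/diskstats | sort > /tmp/ds.before
sleep 10
awk '{print $3, $10}' /proc/diskstats | sort > /tmp/ds.after
join /tmp/ds.before /tmp/ds.after | awk '$3 > $2 {print $1, $3 - $2, "sectors written"}'

If the writes really bypass md1, the sd*5 counters climb while md1 stays flat.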

This process correlates perfectly with the problems I have been having. Over
the last two hours I have watched the server, and every time it becomes
I/O-bound it is because of this process writing to the disks.
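
For anyone following along, a convenient way to catch the writer in the act is
iotop in batch mode (iotop is a separate package; -o limits the output to tasks
actually doing I/O):

iotop -o -b -d 2 -n 3

/proc/<pid>/io also shows cumulative write_bytes per process if iotop isn't
available.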

-- 
https://bugs.launchpad.net/bugs/815540

Title:
  Server becomes unresponsive after spawning 16 ksoftirqd processes
