Would it be possible for you to test the latest upstream kernel? Refer
to https://wiki.ubuntu.com/KernelMainlineBuilds . Please test the latest
v3.14 kernel[0].

If this bug is fixed in the mainline kernel, please add the tag:
'kernel-fixed-upstream'.

If the mainline kernel does not fix this bug, please add the tag:
'kernel-bug-exists-upstream'.

If you are unable to test the mainline kernel, for example because it will
not boot, please add the tag: 'kernel-unable-to-test-upstream'.

Once testing of the upstream kernel is complete, please mark this bug as
"Confirmed".


Thanks in advance.


[0] http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.14-rc8-trusty/
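For reference, a minimal install sketch (assumptions: an amd64 guest and the
"generic" flavour; the .deb filenames below are hypothetical placeholders,
so copy the exact names from the directory listing at [0]):

  BASE=http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.14-rc8-trusty
  # Hypothetical filenames for illustration only; use the exact names
  # shown in the directory listing at [0].
  wget "$BASE/linux-headers-3.14.0-rc8_all.deb" \
       "$BASE/linux-headers-3.14.0-rc8-generic_amd64.deb" \
       "$BASE/linux-image-3.14.0-rc8-generic_amd64.deb"
  sudo dpkg -i linux-headers-*.deb linux-image-*.deb
  sudo reboot
  uname -r   # after rebooting, should report the 3.14.0-rc8 mainline kernel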

** Changed in: linux (Ubuntu)
   Importance: Undecided => Medium

** Changed in: linux (Ubuntu)
       Status: Confirmed => Incomplete

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1297522

Title:
  Since Trusty /proc/diskstats shows weird values

Status in “linux” package in Ubuntu:
  Incomplete

Bug description:
  After upgrading some virtual machines (KVM) to Trusty, I noticed really
  high I/O wait times; e.g. Munin graphs now show read I/O wait times of
  up to 200 seconds(!). See the attached image. Of course, actual latency
  is no higher than before; it is only /proc/diskstats that reports
  completely wrong numbers.

  $ cat /proc/diskstats | awk '$3=="vda" { print $7/$4, $11/$8 }'
  1375.44 13825.1

  According to the documentation for /proc/diskstats, field 4 is the total
  number of reads completed, field 7 is the total time spent reading in
  milliseconds, and fields 8 and 11 are the same for writes. So the numbers
  above are the average read and write latencies in milliseconds.
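 
  The same calculation again, as a sketch with the field mapping spelled out
  (fields are counted as awk sees them, i.e. including major, minor and the
  device name; "vda" is the device from this report, substitute your own):

  $ awk '$3 == "vda" {
        printf "avg read latency:  %.2f ms\n", ($4 ? $7 / $4 : 0)
        printf "avg write latency: %.2f ms\n", ($8 ? $11 / $8 : 0)
    }' /proc/diskstats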

  iostat shows the same weird numbers. Note the "await" column (average time
  in milliseconds for I/O requests):

  $ iostat -dx 1 60
  Linux 3.13.0-19-generic (munin)       03/25/14        _x86_64_        (2 CPU)

  Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
  vda               2.30    16.75   72.45   24.52   572.79   778.37    27.87     1.57  620.00  450.20 1121.83   1.71  16.54

  Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
  vda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00

  Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
  vda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00

  Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
  vda               0.00    52.00    0.00   25.00     0.00   308.00    24.64     0.30 27813.92    0.00 27813.92   0.48   1.20

  I upgraded the host system to Trusty as well; there, however, the
  /proc/diskstats output is normal, as before.

  $ uname -r
  3.13.0-19-generic

