On 01/23/2010 02:15 AM, Antoine Martin wrote:
On 01/23/2010 01:28 AM, Antoine Martin wrote:
On 01/22/2010 02:57 PM, Michael Tokarev wrote:
Antoine Martin wrote:
I've tried various guests, including the most recent Fedora 12 kernels and
a custom 2.6.32.x.
All of them hang around the same point (~1GB written) when I do heavy IO
writes inside the guest.
[]
Host is running: 2.6.31.4
QEMU PC emulator version 0.10.50 (qemu-kvm-devel-88)
Please update to the latest version and repeat.  kvm-88 is ancient;
_lots_ of stuff has been fixed and changed since then, and I doubt anyone
here will dig into kvm-88 problems.

Current kvm is qemu-kvm-0.12.2, released yesterday.
Sorry about that, I didn't realize kvm-88 was so far behind.
Upgrading to qemu-kvm-0.12.2 did solve my IO problems.
Only for a while, unfortunately: the same problem just re-occurred, it merely got a little further this time.
The guest is now just sitting there, with a load average steady at 3.0 (±5%).

Here is a representative trace of the symptom during writeback: the guest writes data at around 50MB/s (CPU going from idle to sys), but after a while it simply stops writing and sits mostly in wait state:
----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai hiq siq| read  writ| recv  send|  in   out | int   csw
  1   0  99   0   0   0|   0     0 | 198B  614B|   0     0 |  36    17
  1   0  99   0   0   0|   0     0 | 198B  710B|   0     0 |  31    17
  1   1  98   0   0   0|   0   128k| 240B  720B|   0     0 |  39    26
  1   1  98   0   0   0|   0     0 | 132B  564B|   0     0 |  31    14
  1   0  99   0   0   0|   0     0 | 132B  468B|   0     0 |  31    14
  1   1  98   0   0   0|   0     0 |  66B  354B|   0     0 |  30    13
  0   4  11  85   0   0| 852k    0 | 444B 1194B|   0     0 | 215   477
  2   2   0  96   0   0| 500k    0 | 132B  756B|   0     0 | 169   458
  3  57   0  39   1   0| 228k   10M| 132B  692B|   0     0 | 476  5387
  6  94   0   0   0   0|  28k   23M| 132B  884B|   0     0 | 373  2142
  6  89   0   2   2   0|  40k   38M|  66B  692B|   0  8192B| 502  5651
  4  47   0  48   0   0| 140k   34M| 132B  836B|   0     0 | 605  1664
  3  64   0  30   2   0|  60k   50M| 132B  370B|   0    60k| 750   631
  4  59   0  35   2   0|  48k   45M| 132B  836B|   0    28k| 708  1293
  7  81   0  10   2   0|  68k   67M| 132B  788B|   0   124k| 928  1634
  5  74   0  20   1   0|  48k   48M| 132B  756B|   0   316k| 830  5715
  5  70   0  24   1   0| 168k   48M| 132B  676B|   0   100k| 734  5325
  4  70   0  24   1   0|  72k   49M| 132B  948B|   0    88k| 776  3784
  5  57   0  37   1   0|  36k   37M| 132B  996B|   0   480k| 602   369
  2  21   0  77   0   0|  36k   23M| 132B  724B|   0    72k| 318  1033
  4  51   0  43   2   0| 112k   43M| 132B  756B|   0   112k| 681   909
  5  55   0  40   0   0|  88k   48M| 140B  926B|  16k   12k| 698   557
----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai hiq siq| read  writ| recv  send|  in   out | int   csw
  3  45   0  51   1   0|2248k   29M| 198B 1028B|  28k   44k| 681  5468
  1  21   0  78   0   0|  92k   17M|1275B 2049B|  92k   52k| 328  1883
  3  30   0  66   1   0| 288k   28M| 498B 2116B|   0    40k| 455   679
  1   1   0  98   0   0|4096B    0 | 394B 1340B|4096B    0 |  41    19
  1   1   0  98   0   0| 148k   52k| 881B 1592B|4096B   44k|  75    61
  1   2   0  97   0   0|1408k    0 | 351B 1727B|   0     0 | 110   109
  2   1   0  97   0   0|8192B    0 |1422B 1940B|   0     0 |  53    34
  1   0   0  99   0   0|4096B   12k| 328B 1018B|   0     0 |  41    24
  1   4   0  95   0   0| 340k    0 |3075B 2152B|4096B    0 | 153   191
  4   7   0  89   0   0|1004k   44k|1526B 1906B|   0     0 | 254   244
  0   1   0  99   0   0|  76k    0 | 708B 1708B|   0     0 |  67    57
  1   1   0  98   0   0|   0     0 | 174B  702B|   0     0 |  32    14
  1   1   0  98   0   0|   0     0 | 132B  354B|   0     0 |  32    11
  1   0   0  99   0   0|   0     0 | 132B  468B|   0     0 |  32    16
  1   0   0  99   0   0|   0     0 | 132B  468B|   0     0 |  32    14
  1   1   0  98   0   0|   0    52k| 132B  678B|   0     0 |  41    27
  1   0   0  99   0   0|   0     0 | 198B  678B|   0     0 |  35    17
  1   1   0  98   0   0|   0     0 | 198B  468B|   0     0 |  34    14
  1   0   0  99   0   0|   0     0 |  66B  354B|   0     0 |  28    11
  1   0   0  99   0   0|   0     0 |  66B  354B|   0     0 |  28     9
  1   1   0  98   0   0|   0     0 | 132B  468B|   0     0 |  34    16
  1   0   0  98   0   1|   0     0 |  66B  354B|   0     0 |  30    11
  1   1   0  98   0   0|   0     0 |  66B  354B|   0     0 |  29    11
From that point onwards, nothing happens.
The host has disk IO to spare... so what is the guest waiting for?
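A steady load average of ~3 with idle disks usually means a few tasks stuck in uninterruptible sleep. A quick way to confirm that inside the guest (a generic incantation, not something from this report; the `wchan` column shows the kernel symbol each task is blocked in):

```shell
# List tasks in uninterruptible sleep (state D) plus the header line;
# wchan:32 widens the wait-channel column so symbol names are not cut off.
ps -eo pid,stat,wchan:32,comm | awk 'NR == 1 || $2 ~ /^D/'
```

On a hung guest I would expect the stuck writers (and likely the flusher thread) to show up here waiting in the page-writeback path.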
Moved to an AMD64 host. No effect.
Disabled swap before running the test. No effect.
Moved the guest to a fully up-to-date FC12 server (2.6.31.6-145.fc12.x86_64). No effect.

I am still seeing traces like this one in dmesg (of varying length, but always ending in sync_page):

[ 2401.350143] INFO: task perl:29512 blocked for more than 120 seconds.
[ 2401.350150] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 2401.350156] perl          D ffffffff81543490     0 29512  29510 0x00000000
[ 2401.350167]  ffff88000fb4a2e0 0000000000000082 ffff88000f97c058 ffff88000f97c018
[ 2401.350177]  ffffffff81808e00 ffff88000a027fd8 ffff88000a027fd8 ffff88000a027fd8
[ 2401.350185]  ffff88000a027588 ffff88000fb4a2e0 ffff880001af4cf0 ffffffff8105a7da
[ 2401.350193] Call Trace:
[ 2401.350210]  [<ffffffff8105a7da>] ? sync_page+0x0/0x45
[ 2401.350220]  [<ffffffff8150a4e0>] ? io_schedule+0x1f/0x32
[ 2401.350228]  [<ffffffff8105a818>] ? sync_page+0x3e/0x45
[ 2401.350235]  [<ffffffff8150a6f4>] ? __wait_on_bit+0x3e/0x71
[ 2401.350245]  [<ffffffff812b50ce>] ? submit_bio+0xa5/0xc1
[ 2401.350252]  [<ffffffff8105a960>] ? wait_on_page_bit+0x69/0x6f
[ 2401.350263]  [<ffffffff8103732d>] ? wake_bit_function+0x0/0x33
[ 2401.350270]  [<ffffffff81062a03>] ? pageout+0x193/0x1dd
[ 2401.350277]  [<ffffffff81063179>] ? shrink_page_list+0x23a/0x43b
[ 2401.350284]  [<ffffffff8150a648>] ? schedule_timeout+0x9e/0xb8
[ 2401.350293]  [<ffffffff8102e58e>] ? process_timeout+0x0/0xd
[ 2401.350300]  [<ffffffff81509f07>] ? io_schedule_timeout+0x1f/0x32
[ 2401.350310]  [<ffffffff8106855a>] ? congestion_wait+0x7b/0x89
[ 2401.350318]  [<ffffffff81037303>] ? autoremove_wake_function+0x0/0x2a
[ 2401.350326]  [<ffffffff8106383c>] ? shrink_inactive_list+0x4c2/0x6f6
[ 2401.350336]  [<ffffffff8112ca0f>] ? ext4_ext_find_extent+0x47/0x267
[ 2401.350362]  [<ffffffff81063d00>] ? shrink_zone+0x290/0x354
[ 2401.350369]  [<ffffffff81063ee9>] ? shrink_slab+0x125/0x137
[ 2401.350377]  [<ffffffff810645a1>] ? try_to_free_pages+0x1a0/0x2b2
[ 2401.350384]  [<ffffffff810621c7>] ? isolate_pages_global+0x0/0x23b
[ 2401.350393]  [<ffffffff8105f1e2>] ? __alloc_pages_nodemask+0x399/0x566
[ 2401.350403]  [<ffffffff810807fa>] ? __slab_alloc+0x121/0x448
[ 2401.350410]  [<ffffffff81125628>] ? ext4_alloc_inode+0x19/0xde
[ 2401.350418]  [<ffffffff81080c4e>] ? kmem_cache_alloc+0x46/0x88
[ 2401.350425]  [<ffffffff81125628>] ? ext4_alloc_inode+0x19/0xde
[ 2401.350433]  [<ffffffff81097086>] ? alloc_inode+0x17/0x77
[ 2401.350441]  [<ffffffff81097a83>] ? iget_locked+0x44/0x10f
[ 2401.350449]  [<ffffffff8111af18>] ? ext4_iget+0x24/0x6bc
[ 2401.350455]  [<ffffffff8112377a>] ? ext4_lookup+0x84/0xe3
[ 2401.350464]  [<ffffffff8108d386>] ? do_lookup+0xc6/0x15c
[ 2401.350472]  [<ffffffff8108dd0a>] ? __link_path_walk+0x4cb/0x605
[ 2401.350481]  [<ffffffff8108dfaf>] ? path_walk+0x44/0x8a
[ 2401.350488]  [<ffffffff8108e2e7>] ? path_init+0x94/0x113
[ 2401.350496]  [<ffffffff8108e3b1>] ? do_path_lookup+0x20/0x84
[ 2401.350502]  [<ffffffff81090843>] ? user_path_at+0x46/0x78
[ 2401.350509]  [<ffffffff810921e2>] ? filldir+0x0/0x1b0
[ 2401.350516]  [<ffffffff81029257>] ? current_fs_time+0x1e/0x24
[ 2401.350524]  [<ffffffff81088b83>] ? cp_new_stat+0x148/0x15e
[ 2401.350531]  [<ffffffff81088dbc>] ? vfs_fstatat+0x2e/0x5b
[ 2401.350538]  [<ffffffff81088e44>] ? sys_newlstat+0x11/0x2d
[ 2401.350546]  [<ffffffff8100203f>] ? system_call_fastpath+0x16/0x1b
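For reference, the 120-second threshold in those messages comes from the kernel's hung-task watchdog; it can be inspected on the guest (and, as the message itself says, silenced, which of course fixes nothing):

```shell
# Show the hung-task watchdog threshold referenced in the dmesg output
# (120 seconds by default on these kernels):
cat /proc/sys/kernel/hung_task_timeout_secs
# Per the message, writing 0 only disables the warning, not the hang:
#   echo 0 > /proc/sys/kernel/hung_task_timeout_secs
```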


Has anyone tried reproducing the example I posted (loop-mount a disk image and fill it with "dd if=/dev/zero" inside the guest)?
Can anyone suggest a way forward, given that I have already updated both kvm and the kernels?
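For anyone trying to reproduce: this is roughly what I run inside the guest. Image size, paths and filesystem are illustrative, and it needs root for the loop mount; the hang shows up after about 1GB written.

```shell
#!/bin/sh
# Rough reproduction recipe, run as root inside the guest.
dd if=/dev/zero of=/tmp/disk.img bs=1M count=2048   # create a 2GB image file
mkfs.ext4 -F /tmp/disk.img                          # any filesystem should do
mkdir -p /mnt/img
mount -o loop /tmp/disk.img /mnt/img
dd if=/dev/zero of=/mnt/img/fill bs=1M              # write until ENOSPC (or the hang)
```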

Thanks
Antoine


QEMU PC emulator version 0.12.2 (qemu-kvm-0.12.2), Copyright (c) 2003-2008 Fabrice Bellard
Guests: various, all recent kernels.
Host: 2.6.31.4

Please advise.

Thanks
Antoine

