On 05/24/2010 12:47 AM, Stefan Hajnoczi wrote:
On Sun, May 23, 2010 at 5:18 PM, Antoine Martin anto...@nagafix.co.uk wrote:
Why does it work in a chroot for the other options (aio=native, if=ide, etc)
but not for aio!=native??
Looks like I am misunderstanding the semantics of chroot...
It
On Sat, May 29, 2010 at 10:42 AM, Antoine Martin anto...@nagafix.co.uk wrote:
Can someone explain the aio options?
All I can find is this:
# qemu-system-x86_64 -h | grep -i aio
[,addr=A][,id=name][,aio=threads|native]
I assume it means that aio=threads emulates the kernel's aio with
On Sat, May 29, 2010 at 04:42:59PM +0700, Antoine Martin wrote:
Can someone explain the aio options?
All I can find is this:
# qemu-system-x86_64 -h | grep -i aio
[,addr=A][,id=name][,aio=threads|native]
I assume it means that aio=threads emulates the kernel's aio with
separate
On Sat, May 29, 2010 at 10:55:18AM +0100, Stefan Hajnoczi wrote:
I would expect that aio=native is faster but benchmarks show that this
isn't true for all workloads.
In what benchmark do you see worse results for aio=native compared to
aio=threads?
On Sat, May 29, 2010 at 11:34 AM, Christoph Hellwig h...@infradead.org wrote:
In what benchmark do you see worse results for aio=native compared to
aio=threads?
Sequential reads using 4 concurrent dd if=/dev/vdb iflag=direct
of=/dev/null bs=8k processes. 2 vcpu guest with 4 GB RAM, virtio
On 05/23/2010 11:53 AM, Antoine Martin wrote:
I'm not: 64-bit host and 64-bit guest.
Just to be sure, I've tested that patch and still no joy:
/dev/vdc: read failed after 0 of 512 at 0: Input/output error
/dev/vdc: read failed after 0 of 512 at 0: Input/output error
/dev/vdc: read failed
On 05/23/2010 06:57 PM, Avi Kivity wrote:
On 05/23/2010 11:53 AM, Antoine Martin wrote:
I'm not: 64-bit host and 64-bit guest.
Just to be sure, I've tested that patch and still no joy:
/dev/vdc: read failed after 0 of 512 at 0: Input/output error
/dev/vdc: read failed after 0 of 512 at 0:
On 05/23/2010 05:07 PM, Antoine Martin wrote:
On 05/23/2010 06:57 PM, Avi Kivity wrote:
On 05/23/2010 11:53 AM, Antoine Martin wrote:
I'm not: 64-bit host and 64-bit guest.
Just to be sure, I've tested that patch and still no joy:
/dev/vdc: read failed after 0 of 512 at 0: Input/output error
On 05/23/2010 09:18 PM, Avi Kivity wrote:
On 05/23/2010 05:07 PM, Antoine Martin wrote:
On 05/23/2010 06:57 PM, Avi Kivity wrote:
On 05/23/2010 11:53 AM, Antoine Martin wrote:
I'm not: 64-bit host and 64-bit guest.
Just to be sure, I've tested that patch and still no joy:
/dev/vdc: read
How about if=ide?
Will test with another kernel and report back (this one doesn't have
any non-virtio drivers)
Can anyone tell me which kernel module I need for if=ide? Google was
no help here.
(before I include dozens of unnecessary modules in my slimmed down and
non modular kernel)
On 05/23/2010 05:53 PM, Antoine Martin wrote:
How about if=ide?
Will test with another kernel and report back (this one doesn't have
any non-virtio drivers)
Can anyone tell me which kernel module I need for if=ide? Google was
no help here.
(before I include dozens of unnecessary modules in
On 05/23/2010 09:43 PM, Antoine Martin wrote:
On 05/23/2010 09:18 PM, Avi Kivity wrote:
On 05/23/2010 05:07 PM, Antoine Martin wrote:
On 05/23/2010 06:57 PM, Avi Kivity wrote:
On 05/23/2010 11:53 AM, Antoine Martin wrote:
I'm not: 64-bit host and 64-bit guest.
Just to be sure, I've tested
On 05/23/2010 05:43 PM, Antoine Martin wrote:
Description of the problem:
A guest tries to mount a large raw partition (1.5TB /dev/sda9 in my
case), this fails with pread enabled, works with it disabled.
Did you mean: preadv?
Yes, here's what makes it work ok (as suggested by Christoph
On 05/23/2010 10:12 PM, Avi Kivity wrote:
On 05/23/2010 05:43 PM, Antoine Martin wrote:
Description of the problem:
A guest tries to mount a large raw partition (1.5TB /dev/sda9 in my
case), this fails with pread enabled, works with it disabled.
Did you mean: preadv?
Yes, here's what makes
On Sun, May 23, 2010 at 5:18 PM, Antoine Martin anto...@nagafix.co.uk wrote:
Why does it work in a chroot for the other options (aio=native, if=ide, etc)
but not for aio!=native??
Looks like I am misunderstanding the semantics of chroot...
It might not be the chroot() semantics but the
Bump.
Now that qemu is less likely to eat my data, *[Qemu-devel] [PATCH 4/8]
block: fix sector comparism in*
http://marc.info/?l=qemu-devel&m=127436114712437
I thought I would try using the raw 1.5TB partition again with KVM,
still no go.
I am still having to use:
#undef CONFIG_PREADV
22.05.2010 14:44, Antoine Martin wrote:
Bump.
Now that qemu is less likely to eat my data, *[Qemu-devel] [PATCH 4/8]
block: fix sector comparism in*
http://marc.info/?l=qemu-devel&m=127436114712437
I thought I would try using the raw 1.5TB partition again with KVM,
still no go.
Hm. I don't
On 05/22/2010 06:17 PM, Michael Tokarev wrote:
22.05.2010 14:44, Antoine Martin wrote:
Bump.
Now that qemu is less likely to eat my data, *[Qemu-devel] [PATCH 4/8]
block: fix sector comparism in*
http://marc.info/?l=qemu-devel&m=127436114712437
I thought I would try using the raw 1.5TB
Antoine Martin wrote:
On 03/08/2010 02:35 AM, Avi Kivity wrote:
On 03/07/2010 09:25 PM, Antoine Martin wrote:
On 03/08/2010 02:17 AM, Avi Kivity wrote:
On 03/07/2010 09:13 PM, Antoine Martin wrote:
What version of glibc do you have installed?
Latest stable:
sys-devel/gcc-4.3.4
On 03/08/2010 02:35 AM, Avi Kivity wrote:
On 03/07/2010 09:25 PM, Antoine Martin wrote:
On 03/08/2010 02:17 AM, Avi Kivity wrote:
On 03/07/2010 09:13 PM, Antoine Martin wrote:
What version of glibc do you have installed?
Latest stable:
sys-devel/gcc-4.3.4
sys-libs/glibc-2.10.1-r1
$ git
On 03/13/2010 11:51 AM, Antoine Martin wrote:
preadv/pwritev was actually introduced in 2.6.30. Perhaps you last
built glibc before that? If so, a rebuild may be all that's necessary.
To be certain, I've rebuilt qemu-kvm against:
linux-headers-2.6.33 + glibc-2.10.1-r1 (both freshly built)
On 03/07/2010 10:21 AM, Avi Kivity wrote:
On 03/07/2010 12:00 PM, Christoph Hellwig wrote:
I can only guess that the info collected so far is not sufficient to
understand what's going on: except for "I/O error writing block NNN"
we do not have anything at all. So it's impossible to know where
Antoine Martin wrote:
[]
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2933400&group_id=180599
The initial report is almost 8 weeks old!
Is data-corruption and data loss somehow less important than the
hundreds of patches that have been submitted since?? Or is there a fix
On Sun, Mar 07, 2010 at 03:48:23AM +0700, Antoine Martin wrote:
Hi,
With qemu-kvm-0.12.3:
./qemu-system-x86_64 [..] -drive file=/dev/sdc9,if=virtio,cache=none [..]
[1.882843] vdc:
[2.365154] udev: starting version 146
[2.693768] end_request: I/O error, dev vdc, sector 126
[
On Sun, Mar 07, 2010 at 12:32:38PM +0300, Michael Tokarev wrote:
Antoine Martin wrote:
[]
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2933400&group_id=180599
The initial report is almost 8 weeks old!
Is data-corruption and data loss somehow less important than the
On 03/07/2010 05:00 PM, Christoph Hellwig wrote:
On Sun, Mar 07, 2010 at 12:32:38PM +0300, Michael Tokarev wrote:
Antoine Martin wrote:
[]
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2933400&group_id=180599
The initial report is almost 8 weeks old!
Is
[snip]
So there is something else at play. And just for the record:
1) kvm-88 works fine *with the exact same setup*
2) I've tried running as root
3) The raw disk mounts fine from the host.
So I *know* the problem is with kvm. I wouldn't post to the list
without triple checking that.
I have
Antoine Martin wrote:
[snip]
So there is something else at play. And just for the record:
1) kvm-88 works fine *with the exact same setup*
2) I've tried running as root
3) The raw disk mounts fine from the host.
So I *know* the problem is with kvm. I wouldn't post to the list
without
On 03/07/2010 12:00 PM, Christoph Hellwig wrote:
I can only guess that the info collected so far is not sufficient to
understand what's going on: except for "I/O error writing block NNN"
we do not have anything at all. So it's impossible to know where
the problem is.
Actually it is, and
On 03/07/2010 07:11 PM, Antoine Martin wrote:
the problem happens right at startup, it can't read _anything_
at all from the disk. In my case, the problem is intermittent
and happens under high load only, hence the big difference.
But anyway, this is something which should be easy to find
On Sun, Mar 07, 2010 at 07:30:06PM +0200, Avi Kivity wrote:
It may also be that glibc is emulating preadv, incorrectly.
I've done a quick audit of all paths leading to pread and all seem
to align correctly. So either a broken glibc emulation or something
else outside the block layer seems
On 03/08/2010 12:30 AM, Avi Kivity wrote:
On 03/07/2010 07:21 PM, Christoph Hellwig wrote:
On Sun, Mar 07, 2010 at 07:18:40PM +0200, Avi Kivity wrote:
The only thing that stands out is this before the read failed
message:
[pid 9098] lseek(12, 0, SEEK_END) = 1321851815424
[pid 9121]
On 03/08/2010 12:34 AM, Christoph Hellwig wrote:
On Sun, Mar 07, 2010 at 07:30:06PM +0200, Avi Kivity wrote:
It may also be that glibc is emulating preadv, incorrectly.
I've done a quick audit of all paths leading to pread and all seem
to align correctly. So either a broken glibc emulation
On 03/07/2010 08:01 PM, Antoine Martin wrote:
On 03/08/2010 12:30 AM, Avi Kivity wrote:
On 03/07/2010 07:21 PM, Christoph Hellwig wrote:
On Sun, Mar 07, 2010 at 07:18:40PM +0200, Avi Kivity wrote:
The only thing that stands out is this before the read failed
message:
[pid 9098] lseek(12,
On 03/07/2010 08:43 PM, Antoine Martin wrote:
Antoine, can you check this? ltrace may help, or run 'strings
libc.so |
grep pread'.
Or just add an
#undef CONFIG_PREADV
just before the first
#ifdef CONFIG_PREADV
in posix-aio-compat.c and see if that works.
It does indeed!
[snip]
The other interesting thing is that it's using pread - which means
the kernel is too old to use preadv, and thus this is a not very well tested
codepath with current qemu.
Too old? I am confused: both host and guest kernels are 2.6.33!
I built KVM against the 2.6.30 headers though.
You need to
Avi Kivity wrote:
On 03/07/2010 08:01 PM, Antoine Martin wrote:
Yes cache=none: -drive file=./vm/var_fs,if=virtio,cache=none
Side question: this is the right thing to do for raw partitions, right?
The rightest.
Isn't cache=writeback now safe on virtio-blk since 2.6.32?
Doesn't it provide
On 03/07/2010 09:07 PM, Antoine Martin wrote:
Antoine, can you check this? ltrace may help,
This is the only part that looked slightly relevant:
[pid 26883] __xstat64(1, "./vm/media_fs", 0x7fff92e40500) = 0
[pid 26883] malloc(48) = 0x2a38d60
[pid 26883] memset(0x2a38d60,
On 03/08/2010 02:09 AM, Asdo wrote:
Avi Kivity wrote:
On 03/07/2010 08:01 PM, Antoine Martin wrote:
Yes cache=none: -drive file=./vm/var_fs,if=virtio,cache=none
Side question: this is the right thing to do for raw partitions, right?
The rightest.
Isn't cache=writeback now safe on
On 03/07/2010 09:09 PM, Asdo wrote:
Avi Kivity wrote:
On 03/07/2010 08:01 PM, Antoine Martin wrote:
Yes cache=none: -drive file=./vm/var_fs,if=virtio,cache=none
Side question: this is the right thing to do for raw partitions, right?
The rightest.
Isn't cache=writeback now safe on
On 03/08/2010 02:10 AM, Avi Kivity wrote:
On 03/07/2010 09:07 PM, Antoine Martin wrote:
Antoine, can you check this? ltrace may help,
This is the only part that looked slightly relevant:
[pid 26883] __xstat64(1, "./vm/media_fs", 0x7fff92e40500) = 0
[pid 26883] malloc(48)
On 03/08/2010 02:17 AM, Avi Kivity wrote:
On 03/07/2010 09:13 PM, Antoine Martin wrote:
What version of glibc do you have installed?
Latest stable:
sys-devel/gcc-4.3.4
sys-libs/glibc-2.10.1-r1
$ git show glibc-2.10~108 | head
commit e109c6124fe121618e42ba882e2a0af6e97b8efc
Author: Ulrich
On 03/07/2010 09:25 PM, Antoine Martin wrote:
On 03/08/2010 02:17 AM, Avi Kivity wrote:
On 03/07/2010 09:13 PM, Antoine Martin wrote:
What version of glibc do you have installed?
Latest stable:
sys-devel/gcc-4.3.4
sys-libs/glibc-2.10.1-r1
$ git show glibc-2.10~108 | head
commit
Hi,
With qemu-kvm-0.12.3:
./qemu-system-x86_64 [..] -drive file=/dev/sdc9,if=virtio,cache=none [..]
[1.882843] vdc:
[2.365154] udev: starting version 146
[2.693768] end_request: I/O error, dev vdc, sector 126
[2.693772] Buffer I/O error on device vdc, logical block 126
[
Antoine Martin wrote:
Hi,
With qemu-kvm-0.12.3:
./qemu-system-x86_64 [..] -drive file=/dev/sdc9,if=virtio,cache=none [..]
[1.882843] vdc:
[2.365154] udev: starting version 146
[2.693768] end_request: I/O error, dev vdc, sector 126
[2.693772] Buffer I/O error on device
On 03/07/2010 04:28 AM, Michael Tokarev wrote:
Antoine Martin wrote:
Hi,
With qemu-kvm-0.12.3:
./qemu-system-x86_64 [..] -drive file=/dev/sdc9,if=virtio,cache=none [..]
[1.882843] vdc:
[2.365154] udev: starting version 146
[2.693768] end_request: I/O error, dev vdc, sector 126