[Expired for udev (Ubuntu) because there has been no activity for 60
days.]
** Changed in: udev (Ubuntu)
Status: Incomplete => Expired
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/893450
Title:
  libvirt fails to start correctly because LVM is not ready
[Expired for libvirt (Ubuntu) because there has been no activity for 60
days.]
** Changed in: libvirt (Ubuntu)
Status: Incomplete => Expired
--
The disk sector size and how it's managed really determine how well it will
perform under various access sizes. In your case, remember the disk sector
size is 4k, so we would hope that Linux is "doing the right thing" in coalescing
these 1k accesses into chunks which get the most out of your hard drive.
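That small-block penalty can be seen directly with dd. The following is my own illustration, not the reporter's exact test: it compares synchronous write throughput at 1k versus 4k block sizes against a scratch file. On a drive with 4k physical sectors, a 1k synchronous write covers only part of a sector and forces a read-modify-write cycle, while a 4k write maps onto a whole sector.

```shell
# Illustration only: compare synchronous write throughput at 1k vs 4k
# block sizes. This writes to a scratch file; pointing of= at a raw
# device would test the disk itself but destroys its contents.
scratch=$(mktemp)

# 256 KiB in 1 KiB synchronous writes: each write is smaller than a
# 4k physical sector, so the drive must read-modify-write.
dd if=/dev/zero of="$scratch" bs=1k count=256 oflag=dsync 2>&1 | tail -n 1

# The same 256 KiB in 4 KiB synchronous writes: each write covers a
# whole physical sector.
dd if=/dev/zero of="$scratch" bs=4k count=64 oflag=dsync 2>&1 | tail -n 1

rm -f "$scratch"
```

Under dsync each write is flushed individually, so some gap between the two rates is expected even on a healthy disk; the pathological case in this bug is a gap of orders of magnitude, or absolute numbers in the tens of KB/sec as in the iozone runs below.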
All the tests were run at the bare metal level.
The results in comment 16 were run over ext4 over LVM over RAID1.
The results in comment 17 were run over ext4 over a plain partition.
I don't know how bad those numbers are... Maybe it's normal that working
with 1k blocks kills the disk performance.
Are these ext4 test results done at the bare metal level or from the guest?
With the guests themselves, are they using the raw volume for the image, or
do they exist as an image file on the filesystem? If they're image files,
then what fs type do they live on?
Thanks.
--
The same tests, now on an ext4 filesystem over a plain partition (no RAID)
- Record size 1k
Children see throughput for 2 initial writers = 34.09 KB/sec
Avg throughput per process = 17.04 KB/sec
Children see throughput for 2 rewriters
And some iozone tests, with 2 child processes, O_DIRECT and SYNC mode
(full log on pastebin: http://pastebin.com/amTbjfdr).
All with:
O_DIRECT feature enabled
SYNC Mode.
Output is in Kbytes/sec
Min process = 2
Max process = 2
Throughput test with 2
Some comments about performance:
- reading works well with big and small block sizes
- writing with a big block size works ok
- the problem arises when writing with a small block size
--
Reading works ok with small and large block sizes:
reading blocks of 1KiB from sda2
# /usr/bin/time dd if=
I've made more tests with the guidance of Peter Petrakis. I've physically
removed the LSI card and tried with a newer kernel (3.0.0-15) without
luck... the performance is the same (reads ok, but slow writes).
--
Worked with the user on IRC and ruled out screaming interrupts
and the LSI card itself (he removed it). User is upgrading to 3.x
kernel and will apprise us of his progress here.
--
Ok, I've added Seagate disks, manually created partitions (aligned to
1 MiB), added the new disks to the RAID, and removed the WD disks. The RAID
is now over two Seagate disks, and the performance is still crap.
# hdparm -i /dev/sda
Model=ST2000DL003-9VT166, FwRev=CC32, SerialNo=6YD0RYED
Config={ HardS
** Changed in: libvirt (Ubuntu)
Status: New => Incomplete
** Changed in: udev (Ubuntu)
Status: New => Incomplete
--
In re-examining the evidence and chatting with Serge, a few things come
to mind.
1) There's a disparity between the conclusion formed in comment #3 and the
latest udev logs. It could easily be true that the udev logging simply was
held long enough for that last LV to be activated. It appears to b
Sure enough, lv_robot-pv0 (the backing lv for the vm) doesn't show up
either in the lvscan or udev debug output.
Certainly an LVM failure here.
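If the race is that udev/LVM activates lv_robot-pv0 only after libvirt has already tried to autostart the guest, a crude stopgap is to wait for the backing device to appear before starting guests. This is my own sketch, not a fix proposed in this thread, and the device path is assembled from the VG/LV names mentioned here (vg_default, lv_robot-pv0), so it may not match the real system:

```shell
# Sketch of a boot-time workaround, not a fix from this thread: poll for
# the guest's backing LV before letting libvirt autostart anything.
wait_for_dev() {
    path=$1
    tries=$2
    while [ "$tries" -gt 0 ]; do
        [ -e "$path" ] && return 0
        tries=$((tries - 1))
        sleep 1
    done
    return 1
}

# Device path is an assumption built from the names in this thread.
wait_for_dev /dev/vg_default/lv_robot-pv0 3 \
    || echo "backing LV still missing; libvirt autostart would fail here"
```

The real fix is ordering rather than polling: libvirt's startup should not run until udev has settled (udevadm settle) and LVM activation (vgchange -ay) has completed.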
--
** Changed in: libvirt (Ubuntu)
Status: Incomplete => New
** Changed in: udev (Ubuntu)
Status: Incomplete => New
--
I've attached /dev/.initramfs/udev.out. I don't know if this is
relevant, but the write performance on the disks is horrible; I think
this could be caused by misalignment of partitions ("dd if=/dev/zero
of=zeros bs=64b oflag=dsync" gives me around 400 kB/s).
** Attachment added: "Contents of /dev/.initramfs/udev.out"
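Since misalignment is suspected, one quick check is whether the partition's starting sector is a multiple of 8 (i.e. 4 KiB with 512-byte logical sectors); modern 1 MiB alignment means a multiple of 2048. This is my own sketch, and the sysfs path for the first partition of sda is an assumption about this system:

```shell
# Sketch: check whether a partition starts on a 4k boundary. With
# 512-byte logical sectors, 4k alignment means start % 8 == 0 and
# 1 MiB alignment means start % 2048 == 0. The sysfs path is an
# assumption; fall back to the classic misaligned start sector 63
# for illustration if it is absent.
start=$(cat /sys/block/sda/sda1/start 2>/dev/null || echo 63)
if [ $((start % 8)) -eq 0 ]; then
    echo "start sector $start: 4k-aligned"
else
    echo "start sector $start: NOT 4k-aligned (expect slow small writes)"
fi
```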
This is the output of the lvm scan commands. I'll ask permission to
reboot the server tonight to post the debug output of udev.
web4:~# sudo pvscan
PV /dev/md1 VG vg_default lvm2 [1.81 TiB / 1.33 TiB free]
Total: 1 [1.81 TiB] / in use: 1 [1.81 TiB] / in no VG: 0 [0 ]
web4:~# sudo vgscan
Please don't change the state from incomplete until the requested
information has been provided.
** Changed in: udev (Ubuntu)
Status: Confirmed => Incomplete
--
** Changed in: udev (Ubuntu)
Status: Incomplete => Confirmed
--
Hi,
to find out more, we need the udev debug output. Could you please edit
/usr/share/initramfs-tools/scripts/init-top/udev so that the line
starting udevd reads:
/sbin/udevd --daemon --debug --resolve-names=never > /dev/.initramfs/udev.out 2>&1
then run
sudo update-initramfs -k all -u
Afte
** Summary changed:
- KVM guest fails to autostart sometimes with
virSecurityDACRestoreSecurityFileLabel error
+ libvirt fails to start correctly because LVM is not ready
--