On Thu, Feb 2, 2017 at 12:04 PM, Gianluca Cecchi <gianluca.cec...@gmail.com>
wrote:

>
>
> On Thu, Feb 2, 2017 at 10:48 AM, Nir Soffer <nsof...@redhat.com> wrote:
>
>> On Thu, Feb 2, 2017 at 1:11 AM, Gianluca Cecchi
>> <gianluca.cec...@gmail.com> wrote:
>> > On Wed, Feb 1, 2017 at 8:22 PM, Gianluca Cecchi <
>> gianluca.cec...@gmail.com>
>> > wrote:
>> >>
>> >>
>> >> OK. In the meantime I have applied your suggested config and restarted
>> >> the two nodes.
>> >> Let me test and see if I find any problems, also running some I/O tests.
>> >> Thanks in the meantime,
>> >> Gianluca
>> >
>> >
>> >
>> > Quick test without much success
>> >
>> > Inside the guest I ran this loop:
>> > while true
>> > do
>> > time dd if=/dev/zero bs=1024k count=1024 of=/home/g.cecchi/testfile
>>
>
A single 'dd' rarely saturates high-performance storage.
There are better utilities to test with ('fio', 'vdbench' and 'ddpt', for
example).
It is also a very theoretical scenario: you very rarely write zeros, and you
very rarely write that much sequential IO with a fixed block size. So it's
almost 'hero numbers'.
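
For example, a minimal sequential-write run with fio might look like the
sketch below (the target file path is just reused from the loop above; the
size, queue depth and job count are arbitrary examples, and --direct=1
bypasses the guest page cache, which the dd loop above does not):

  # sketch of an fio invocation for a sequential 1M-block write test
  fio --name=seqwrite --filename=/home/g.cecchi/testfile --rw=write \
      --bs=1M --size=4G --ioengine=libaio --direct=1 --iodepth=16 --numjobs=1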

>> > sleep 5
>> > done
>>
>> I don't think this test is related to the issues you reported earlier.
>>
>>
> I thought the same, and I agree with all the related comments you wrote.
> I'm going to test the suggested modifications for the chunk settings.
> In general, do you recommend thin provisioning at all on SAN storage?
>

Depends on your SAN. On a thin-provisioned one (potentially with inline dedup
and compression, such as XtremIO, Pure, Nimble and others) I don't see
great value in thin provisioning.
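
Regarding the chunk modifications mentioned above: if those are the usual
vdsm thin-provisioning knobs, they are normally set in /etc/vdsm/vdsm.conf
on each host and need a vdsmd restart to take effect. A minimal sketch,
assuming the [irs] option names volume_utilization_percent and
volume_utilization_chunk_mb; the values below are illustrative only:

  [irs]
  # threshold (in percent) that triggers extending a thin block volume;
  # lower values extend earlier
  volume_utilization_percent = 25
  # size of each extension step, in MB
  volume_utilization_chunk_mb = 2048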


>
> I decided to switch to preallocated for further tests, to confirm the
> behavior.
> So I created a snapshot and then a clone of the VM, changing the allocation
> policy of the disk to preallocated.
> So far so good.
>
> Feb 2, 2017 10:40:23 AM VM ol65preallocated creation has been completed.
> Feb 2, 2017 10:24:15 AM VM ol65preallocated creation was initiated by
> admin@internal-authz.
> Feb 2, 2017 10:22:31 AM Snapshot 'for cloning' creation for VM 'ol65' has
> been completed.
> Feb 2, 2017 10:22:31 AM Snapshot 'for cloning' creation for VM 'ol65' was
> initiated by admin@internal-authz.
>
> so the throughput seems OK for this storage type (the LUNs are on RAID5
> made of SATA disks): 16 minutes to write 90 GB is about 96 MB/s, which is
> what I expected.
>
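
(For reference, the arithmetic checks out if those are binary units:
90 * 1024 MiB / (16 * 60 s) = 92160 MiB / 960 s ≈ 96 MiB/s.)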

What is your expectation? Is it FC or iSCSI? How many paths? What is the IO
scheduler in the VM? Is it using virtio-blk or virtio-scsi?
Y.
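
Quick ways to check those, as a sketch (the device names vda/sda below are
examples; virtio-blk disks show up in the guest as vdX, virtio-scsi ones
as sdX):

  # inside the guest: current IO scheduler of the virtual disk
  cat /sys/block/vda/queue/scheduler
  cat /sys/block/sda/queue/scheduler
  # on the hypervisor host: multipath topology and number of active paths
  multipath -ll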



>
> What I see in messages during the cloning phase, from 10:24 to 10:40:
>
> Feb  2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb  2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb  2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb  2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb  2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb  2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb  2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb  2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb  2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb  2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb  2 10:24:14 ovmsrv05 journal: vdsm root WARNING File: /rhev/data-center/588237b8-0031-02f6-035d-000000000136/922b5269-ab56-4c4d-838f-49d33427e2ab/images/9d1c977f-540d-436a-9d93-b1cb0816af2a/607dbf59-7d4d-4fc3-ae5f-e8824bf82648 already removed
> Feb  2 10:24:14 ovmsrv05 multipathd: dm-15: remove map (uevent)
> Feb  2 10:24:14 ovmsrv05 multipathd: dm-15: devmap not registered, can't
> remove
> Feb  2 10:24:14 ovmsrv05 multipathd: dm-15: remove map (uevent)
> Feb  2 10:24:17 ovmsrv05 kernel: blk_update_request: critical target
> error, dev dm-4, sector 44566529
> Feb  2 10:24:17 ovmsrv05 kernel: dm-15: WRITE SAME failed. Manually
> zeroing.
> Feb  2 10:40:07 ovmsrv05 kernel: scsi_verify_blk_ioctl: 16 callbacks
> suppressed
> Feb  2 10:40:07 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb  2 10:40:17 ovmsrv05 multipathd: dm-15: remove map (uevent)
> Feb  2 10:40:17 ovmsrv05 multipathd: dm-15: devmap not registered, can't
> remove
> Feb  2 10:40:17 ovmsrv05 multipathd: dm-15: remove map (uevent)
> Feb  2 10:40:22 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
>
>
>
>
>> > After about 7 rounds I get this in messages of the host where the VM is
>> > running:
>> >
>> > Feb  1 23:31:39 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a
>> partition!
>> > Feb  1 23:31:39 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a
>> partition!
>> > Feb  1 23:31:44 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a
>> partition!
>> > Feb  1 23:31:45 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a
>> partition!
>> > Feb  1 23:31:45 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a
>> partition!
>> > Feb  1 23:31:47 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a
>> partition!
>> > Feb  1 23:31:50 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a
>> partition!
>> > Feb  1 23:31:50 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a
>> partition!
>> > Feb  1 23:31:56 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a
>> partition!
>> > Feb  1 23:31:57 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a
>> partition!
>>
>> This is interesting; we have seen these messages before, but could never
>> detect the flow causing them. Are you sure you see this each time you
>> extend your disk?
>>
>> If you can reproduce this, please file a bug.
>>
>>
> OK, see also the messages logged during the clone phase, above.
> Gianluca
>
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
