Re: [ovirt-users] Moving thin provisioned disks question

2017-06-29 Thread Fred Rolland
It implies only debug logging for qemu-img operations.

To make the log configuration persistent, you will need to edit the file
/etc/vdsm/logger.conf :

1. Insert qemu at the end of the keys list in the [loggers] entry:
[loggers]
keys=root,vds,storage,virt,ovirt_hosted_engine_ha,ovirt_hosted_engine_ha_config,IOProcess,devel,qemu

2. Add this entry:

[logger_qemu]
level=DEBUG
handlers=logfile
qualname=QemuImg
propagate=0

3. Restart Vdsm
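
For example, on a CentOS 7 host Vdsm can be restarted with:

systemctl restart vdsmd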

Regarding the progress bar, it was introduced in 4.1.
Can you open a bug and provide logs so that we can investigate why it does
not work for you?

On Wed, Jun 28, 2017 at 5:23 PM, Gianluca Cecchi 
wrote:

> On Wed, Jun 28, 2017 at 1:14 PM, Fred Rolland  wrote:
>
>>
>> Hi,
>>
>> Yes, the /etc/vdsm/logger.conf contains log level settings for most of
>> the operations, but not all.
>>
>> You can use vdsm-client to enable logging:
>>
>> vdsm-client Host setLogLevel level=DEBUG name=QemuImg
>>
>>
>>
> Thanks.
> The vdsm-client command was not installed by default during my host setup
> (plain CentOS 7.3 server); it is not among the default packages installed
> when deploying a host from the web UI.
> So I manually installed it (at the version matching the current vdsm one):
> yum install vdsm-client-4.19.10.1-1.el7.centos.x86_64
>
> [root@ov300 vdsm]# vdsm-client Host setLogLevel level=DEBUG name=QemuImg
> true
> [root@ov300 vdsm]#
>
> And then I see in vdsm.log:
>
> 2017-06-28 16:20:08,396+0200 DEBUG (tasks/5) [QemuImg] qemu-img operation
> progress: 3.0% (qemuimg:330)
>
> How can I make it persistent across reboots? Is it safe, and does it imply
> only debug logging for qemu-img commands?
>
>
>
>> Regarding the UI, see the attached screenshot. Under the "Locked" status
>> there is a progress bar.
>>
>
> That is not my case.
> See this screenshot taken while qemu-img convert is already running, moving
> a 900Gb disk from one iSCSI SD to another:
> https://drive.google.com/file/d/0BwoPbcrMv8mvcmpOVWZFM09RM00/view?usp=sharing
>
> My engine is 4.1.1.8-1.el7.centos on CentOS 7.3 and my host has
> qemu-kvm-ev-2.6.0-28.el7_3.6.1.x86_64 and vdsm-4.19.10.1-1.el7.centos.x86_64
> In which version was it introduced?
>
>


Re: [ovirt-users] Moving thin provisioned disks question

2017-06-28 Thread Gianluca Cecchi
On Wed, Jun 28, 2017 at 1:14 PM, Fred Rolland  wrote:

>
> Hi,
>
> Yes, the /etc/vdsm/logger.conf contains log level settings for most of the
> operations, but not all.
>
> You can use vdsm-client to enable logging:
>
> vdsm-client Host setLogLevel level=DEBUG name=QemuImg
>
>
>
Thanks.
The vdsm-client command was not installed by default during my host setup
(plain CentOS 7.3 server); it is not among the default packages installed
when deploying a host from the web UI.
So I manually installed it (at the version matching the current vdsm one):
yum install vdsm-client-4.19.10.1-1.el7.centos.x86_64
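
To double-check that the installed client matches the running Vdsm, a quick
query like this can be used:

rpm -q vdsm vdsm-client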

[root@ov300 vdsm]# vdsm-client Host setLogLevel level=DEBUG name=QemuImg
true
[root@ov300 vdsm]#

And then I see in vdsm.log:

2017-06-28 16:20:08,396+0200 DEBUG (tasks/5) [QemuImg] qemu-img operation
progress: 3.0% (qemuimg:330)
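
These messages can be followed live with something like this (assuming the
default Vdsm log location):

tail -f /var/log/vdsm/vdsm.log | grep 'qemu-img operation progress'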

How can I make it persistent across reboots? Is it safe, and does it imply
only debug logging for qemu-img commands?



> Regarding the UI, see the attached screenshot. Under the "Locked" status
> there is a progress bar.
>

That is not my case.
See this screenshot taken while qemu-img convert is already running, moving
a 900Gb disk from one iSCSI SD to another:
https://drive.google.com/file/d/0BwoPbcrMv8mvcmpOVWZFM09RM00/view?usp=sharing

My engine is 4.1.1.8-1.el7.centos on CentOS 7.3 and my host has
qemu-kvm-ev-2.6.0-28.el7_3.6.1.x86_64 and vdsm-4.19.10.1-1.el7.centos.x86_64
In which version was it introduced?


Re: [ovirt-users] Moving thin provisioned disks question

2017-06-28 Thread Gianluca Cecchi
On Wed, Jun 28, 2017 at 8:48 AM, Fred Rolland  wrote:

> Hi,
>
> The qemu operation progress is parsed [1] and is printed at debug level in
> the Vdsm log [2].
>

Where should I see this information: in vdsm.log?

It seems I see nothing. Could it depend on settings in
/etc/vdsm/logger.conf? If so, which part of this file controls the presence
of the log messages related to move progress?

> In the UI, a progress indication of the operation is available on the disk's
> status for the "Move Disk" operation.
> Currently this exists only for this operation.
>
>
I have not understood.
In all parts of the GUI where there is a disk section, I only see "Locked"
in the status,

e.g.:

Disks -> the line with the moving disk presents "Locked"
Virtual Machines -> select my vm -> Disks -> select the moving disk, it
contains "Locked"
Virtual Machines -> select my vm -> Snapshots -> Disks, it presents "Locked"

Version is 4.1.1

Thanks


Gianluca

>


Re: [ovirt-users] Moving thin provisioned disks question

2017-06-28 Thread Fred Rolland
Hi,

The qemu operation progress is parsed [1] and is printed at debug level in
the Vdsm log [2].
In the UI, a progress indication of the operation is available on the disk's
status for the "Move Disk" operation.
Currently this exists only for this operation.

Regards,

Fred


[1]
https://github.com/oVirt/vdsm/blob/917905d85abb1a24d8beea09f012b81ed594b349/lib/vdsm/qemuimg.py#L280
[2]
https://github.com/oVirt/vdsm/blob/917905d85abb1a24d8beea09f012b81ed594b349/lib/vdsm/qemuimg.py#L332
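
For illustration only (a minimal sketch of the idea, not the actual Vdsm
implementation linked in [1]; the function name and regex are illustrative):
qemu-img -p periodically rewrites a progress line such as "    (12.34/100%)"
on stdout, so a parent process can capture the stream and extract the
percentage roughly like this:

import re
import subprocess

# Matches qemu-img's progress output, e.g. "    (12.34/100%)"
PROGRESS = re.compile(r'\((\d+\.\d+)/100%\)')

def convert_with_progress(cmd):
    # qemu-img separates progress updates with carriage returns rather
    # than newlines, so scan for both when splitting the stream
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    buf = b""
    while True:
        c = proc.stdout.read(1)
        if not c:
            break
        if c in b"\r\n":
            m = PROGRESS.search(buf.decode("ascii", "replace"))
            if m:
                print("qemu-img operation progress: %s%%" % m.group(1))
            buf = b""
        else:
            buf += c
    return proc.wait()

# Example usage (paths are placeholders):
# convert_with_progress(["qemu-img", "convert", "-p", "-t", "none", "-T",
#                        "none", "-f", "raw", "src.img", "-O", "raw",
#                        "dst.img"])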

On Tue, Jun 27, 2017 at 6:50 PM, Gianluca Cecchi 
wrote:

> On Tue, Jun 27, 2017 at 3:09 PM, InterNetX - Juergen Gotteswinter <
> j...@internetx.com> wrote:
>
>>
>> > >
>> > > Suppose I have one 500Gb thin provisioned disk
>> > > Why can I indirectly see that the actual size is 300Gb only in
>> > > Snapshots tab --> Disks of its VM ?
>> >
>> > if you are using live storage migration, ovirt creates a qcow/lvm
>> > snapshot of the vm block device. but for whatever reason, it does NOT
>> > remove the snapshot after the migration has finished. you have to
>> > remove it yourself, otherwise disk usage will grow more and more.
>> >
>> >
>> > I believe you are referring to the "Auto-generated" snapshot created
>> > during live storage migration. This behavior is reported
>> > in https://bugzilla.redhat.com/1317434 and fixed since 4.0.0.
>>
>> Yep, that's what I meant. I just wasn't aware that this isn't the case
>> anymore for 4.x and above. Sorry for the confusion.
>>
>>
> I confirm that the snapshot of the VM, named "Auto-generated for Live
> Storage Migration", was removed after the disk move completed.
> Also, for a preallocated disk the "qemu-img convert" has format options
> raw/raw:
>
> [root@ov300 ~]# ps -ef|grep qemu-img
> vdsm 18343  3585  1 12:07 ?00:00:04 /usr/bin/qemu-img convert
> -p -t none -T none -f raw
> /rhev/data-center/mnt/blockSD/5ed04196-87f1-480e-9fee-9dd450a3b53b/images/303287ad-b7ee-40b4-b303-108a5b07c54d/fd408b9c-fdd5-4f72-a73c-332f47868b3c
> -O raw
> /rhev/data-center/mnt/blockSD/fa33df49-b09d-4f86-9719-ede649542c21/images/303287ad-b7ee-40b4-b303-108a5b07c54d/fd408b9c-fdd5-4f72-a73c-332f47868b3c
>
>
> For a thin provisioned disk, instead, it is of the form qcow2/qcow2:
> [root@ov300 ~]# ps -ef|grep qemu-img
> vdsm 28545  3585  3 12:49 ?00:00:01 /usr/bin/qemu-img convert
> -p -t none -T none -f qcow2
> /rhev/data-center/mnt/blockSD/5ed04196-87f1-480e-9fee-9dd450a3b53b/images/9302dca6-285e-49f7-a64c-68c5c95bdf91/0f3f927d-bb42-479f-ba86-cbd7d4c0fb51
> -O qcow2 -o compat=1.1
> /rhev/data-center/mnt/blockSD/fa33df49-b09d-4f86-9719-ede649542c21/images/9302dca6-285e-49f7-a64c-68c5c95bdf91/0f3f927d-bb42-479f-ba86-cbd7d4c0fb51
>
>
> BTW: the "-p" option should give:
>
>     -p  display progress bar (compare, convert and rebase commands only).
>         If the -p option is not used for a command that supports it, the
>         progress is reported when the process receives a "SIGUSR1" signal.
>
> Does it have any effect here? I don't see any progress bar/information
> inside the GUI. Is it perhaps written to any other file on the filesystem?
>
> Thanks,
> Gianluca
>


Re: [ovirt-users] Moving thin provisioned disks question

2017-06-27 Thread Gianluca Cecchi
On Tue, Jun 27, 2017 at 3:09 PM, InterNetX - Juergen Gotteswinter <
j...@internetx.com> wrote:

>
> > >
> > > Suppose I have one 500Gb thin provisioned disk
> > > Why can I indirectly see that the actual size is 300Gb only in
> > > Snapshots tab --> Disks of its VM ?
> >
> > if you are using live storage migration, ovirt creates a qcow/lvm
> > snapshot of the vm block device. but for whatever reason, it does NOT
> > remove the snapshot after the migration has finished. you have to
> > remove it yourself, otherwise disk usage will grow more and more.
> >
> >
> > I believe you are referring to the "Auto-generated" snapshot created
> > during live storage migration. This behavior is reported
> > in https://bugzilla.redhat.com/1317434 and fixed since 4.0.0.
>
Yep, that's what I meant. I just wasn't aware that this isn't the case
anymore for 4.x and above. Sorry for the confusion.
>
>
I confirm that the snapshot of the VM, named "Auto-generated for Live
Storage Migration", was removed after the disk move completed.
Also, for a preallocated disk the "qemu-img convert" has format options
raw/raw:

[root@ov300 ~]# ps -ef|grep qemu-img
vdsm 18343  3585  1 12:07 ?00:00:04 /usr/bin/qemu-img convert
-p -t none -T none -f raw
/rhev/data-center/mnt/blockSD/5ed04196-87f1-480e-9fee-9dd450a3b53b/images/303287ad-b7ee-40b4-b303-108a5b07c54d/fd408b9c-fdd5-4f72-a73c-332f47868b3c
-O raw
/rhev/data-center/mnt/blockSD/fa33df49-b09d-4f86-9719-ede649542c21/images/303287ad-b7ee-40b4-b303-108a5b07c54d/fd408b9c-fdd5-4f72-a73c-332f47868b3c


For a thin provisioned disk, instead, it is of the form qcow2/qcow2:
[root@ov300 ~]# ps -ef|grep qemu-img
vdsm 28545  3585  3 12:49 ?00:00:01 /usr/bin/qemu-img convert
-p -t none -T none -f qcow2
/rhev/data-center/mnt/blockSD/5ed04196-87f1-480e-9fee-9dd450a3b53b/images/9302dca6-285e-49f7-a64c-68c5c95bdf91/0f3f927d-bb42-479f-ba86-cbd7d4c0fb51
-O qcow2 -o compat=1.1
/rhev/data-center/mnt/blockSD/fa33df49-b09d-4f86-9719-ede649542c21/images/9302dca6-285e-49f7-a64c-68c5c95bdf91/0f3f927d-bb42-479f-ba86-cbd7d4c0fb51
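
Once the move completes, one way to double-check the format the destination
volume ended up with is qemu-img info (assuming the LV is still activated on
the host), for example against the raw destination above:

qemu-img info /rhev/data-center/mnt/blockSD/fa33df49-b09d-4f86-9719-ede649542c21/images/303287ad-b7ee-40b4-b303-108a5b07c54d/fd408b9c-fdd5-4f72-a73c-332f47868b3c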


BTW: the "-p" option should give:

    -p  display progress bar (compare, convert and rebase commands only).
        If the -p option is not used for a command that supports it, the
        progress is reported when the process receives a "SIGUSR1" signal.

Does it have any effect here? I don't see any progress bar/information
inside the GUI. Is it perhaps written to any other file on the filesystem?
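
Side note: per the man page excerpt above, if qemu-img had been started
without -p, a one-off progress report could be requested from another shell
with something like this (the pgrep pattern is only an illustration):

kill -USR1 $(pgrep -f 'qemu-img convert')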

Thanks,
Gianluca


Re: [ovirt-users] Moving thin provisioned disks question

2017-06-27 Thread InterNetX - Juergen Gotteswinter

> >
> > Suppose I have one 500Gb thin provisioned disk
> > Why can I indirectly see that the actual size is 300Gb only in Snapshots
> > tab --> Disks of its VM ?
> 
> if you are using live storage migration, ovirt creates a qcow/lvm
> snapshot of the vm block device. but for whatever reason, it does NOT
> remove the snapshot after the migration has finished. you have to remove
> it yourself, otherwise disk usage will grow more and more.
> 
> 
> I believe you are referring to the "Auto-generated" snapshot created
> during live storage migration. This behavior is reported
> in https://bugzilla.redhat.com/1317434 and fixed since 4.0.0.

Yep, that's what I meant. I just wasn't aware that this isn't the case
anymore for 4.x and above. Sorry for the confusion.

> 
> 
> >
> > Thanks,
> > Gianluca
> >



Re: [ovirt-users] Moving thin provisioned disks question

2017-06-27 Thread Ala Hino
On Tue, Jun 27, 2017 at 1:37 PM, InterNetX - Juergen Gotteswinter <
j...@internetx.com> wrote:

>
>
> On 27.06.2017 at 11:27, Gianluca Cecchi wrote:
> > Hello,
> > I have a storage domain that I have to empty, moving its disks to
> > another storage domain,
> >
> > Both source and target domains are iSCSI
> > What is the behavior in case of preallocated and thin provisioned disk?
> > Are they preserved with their initial configuration?
>
> yes, they stay within their initial configuration
>
> >
> > Suppose I have one 500Gb thin provisioned disk
> > Why can I indirectly see that the actual size is 300Gb only in Snapshots
> > tab --> Disks of its VM ?
>
> if you are using live storage migration, ovirt creates a qcow/lvm
> snapshot of the vm block device. but for whatever reason, it does NOT
> remove the snapshot after the migration has finished. you have to remove
> it yourself, otherwise disk usage will grow more and more.
>

I believe you are referring to the "Auto-generated" snapshot created during
live storage migration. This behavior is reported in
https://bugzilla.redhat.com/1317434 and fixed since 4.0.0.

>
> >
> > Thanks,
> > Gianluca
> >
> >


Re: [ovirt-users] Moving thin provisioned disks question

2017-06-27 Thread Gianluca Cecchi
On Tue, Jun 27, 2017 at 12:37 PM, InterNetX - Juergen Gotteswinter <
j...@internetx.com> wrote:

>
>
> On 27.06.2017 at 11:27, Gianluca Cecchi wrote:
> > Hello,
> > I have a storage domain that I have to empty, moving its disks to
> > another storage domain,
> >
> > Both source and target domains are iSCSI
> > What is the behavior in case of preallocated and thin provisioned disk?
> > Are they preserved with their initial configuration?
>
> yes, they stay within their initial configuration
>

Thanks. I'll try it and report back in case of problems.


>
> >
> > Suppose I have one 500Gb thin provisioned disk
> > Why can I indirectly see that the actual size is 300Gb only in Snapshots
> > tab --> Disks of its VM ?
>
> if you are using live storage migration, ovirt creates a qcow/lvm
> snapshot of the vm block device. but for whatever reason, it does NOT
> remove the snapshot after the migration has finished. you have to remove
> it yourself, otherwise disk usage will grow more and more.
>
>
Do you mean a snapshot of the original disk? In my case I'm moving from
storage domain to storage domain, and I would expect nothing to remain at
the source storage...
How can I verify the snapshot is still there?
Is there any Bugzilla entry tracking this? It would be strange if not...

Gianluca


Re: [ovirt-users] Moving thin provisioned disks question

2017-06-27 Thread InterNetX - Juergen Gotteswinter


On 27.06.2017 at 11:27, Gianluca Cecchi wrote:
> Hello,
> I have a storage domain that I have to empty, moving its disks to
> another storage domain,
> 
> Both source and target domains are iSCSI
> What is the behavior in case of preallocated and thin provisioned disk?
> Are they preserved with their initial configuration?

yes, they stay within their initial configuration

> 
> Suppose I have one 500Gb thin provisioned disk
> Why can I indirectly see that the actual size is 300Gb only in Snapshots
> tab --> Disks of its VM ?

if you are using live storage migration, ovirt creates a qcow/lvm
snapshot of the vm block device. but for whatever reason, it does NOT
remove the snapshot after the migration has finished. you have to remove
it yourself, otherwise disk usage will grow more and more.

> 
> Thanks,
> Gianluca
> 
> 


[ovirt-users] Moving thin provisioned disks question

2017-06-27 Thread Gianluca Cecchi
Hello,
I have a storage domain that I have to empty, moving its disks to another
storage domain.

Both source and target domains are iSCSI.
What is the behavior in the case of preallocated and thin provisioned disks?
Are they preserved with their initial configuration?

Suppose I have one 500Gb thin provisioned disk.
Why is the Snapshots tab --> Disks of its VM the only place where I can
indirectly see that the actual size is 300Gb?

Thanks,
Gianluca