[ovirt-users] Fwd: Re: HA agent fails to start

2016-04-14 Thread Richard Neuboeck
On 04/14/2016 11:03 PM, Simone Tiraboschi wrote:
> On Thu, Apr 14, 2016 at 10:38 PM, Simone Tiraboschi  
> wrote:
>> On Thu, Apr 14, 2016 at 6:53 PM, Richard Neuboeck  
>> wrote:
>>> On 14.04.16 18:46, Simone Tiraboschi wrote:
 On Thu, Apr 14, 2016 at 4:04 PM, Richard Neuboeck  
 wrote:
> On 04/14/2016 02:14 PM, Simone Tiraboschi wrote:
>> On Thu, Apr 14, 2016 at 12:51 PM, Richard Neuboeck
>>  wrote:
>>> On 04/13/2016 10:00 AM, Simone Tiraboschi wrote:
 On Wed, Apr 13, 2016 at 9:38 AM, Richard Neuboeck 
  wrote:
> The answers file shows the setup time of both machines.
>
> On both machines hosted-engine.conf got rotated right before I wrote
> this mail. Is it possible that I managed to interrupt the rotation 
> with
> the reboot so the backup was accurate but the update not yet written 
> to
> hosted-engine.conf?

 AFAIK we don't have any rotation mechanism for that file; something
 else you have in place on that host?
>>>
>>> Those machines are all CentOS 7.2 minimal installs. The only
>>> adaptation I do is installing vim, removing postfix and installing
>>> exim, removing firewalld and installing iptables-service. Then I add
>>> the oVirt repos (3.6 and 3.6-snapshot) and deploy the host.
>>>
>>> But checking lsof shows that 'ovirt-ha-agent --no-daemon' has access
>>> to the config file (and the one ending with ~):
>>>
>>> # lsof | grep 'hosted-engine.conf~'
>>> ovirt-ha- 193446   vdsm  351u  REG  253,0  1021  135070683 /etc/ovirt-hosted-engine/hosted-engine.conf~
>>
>> This is not that much relevant if the file was renamed after
>> ovirt-ha-agent opened it.
>> Try this:
>>
>> [root@c72he20160405h1 ovirt-hosted-engine-setup]# tail -n1 -f
>> /etc/ovirt-hosted-engine/hosted-engine.conf &
>> [1] 28866
>> [root@c72he20160405h1 ovirt-hosted-engine-setup]# port=
>>
>> [root@c72he20160405h1 ovirt-hosted-engine-setup]# lsof | grep 
>> hosted-engine.conf
>> tail  28866  root    3r  REG  253,0  1014  1595898 /etc/ovirt-hosted-engine/hosted-engine.conf
>> [root@c72he20160405h1 ovirt-hosted-engine-setup]# mv
>> /etc/ovirt-hosted-engine/hosted-engine.conf
>> /etc/ovirt-hosted-engine/hosted-engine.conf_123
>> [root@c72he20160405h1 ovirt-hosted-engine-setup]# lsof | grep 
>> hosted-engine.conf
>> tail  28866  root    3r  REG  253,0  1014  1595898 /etc/ovirt-hosted-engine/hosted-engine.conf_123
>> [root@c72he20160405h1 ovirt-hosted-engine-setup]#
>>
>
> I've issued the commands you suggested but I don't know how that
> helps to find the process accessing the config files.
>
> After moving the hosted-engine.conf file the HA agent crashed
> logging the information that the config file is not available.
>
> Here is the output from every command:
>
> # tail -n1 -f /etc/ovirt-hosted-engine/hosted-engine.conf &
> [1] 167865
> [root@cube-two ~]# port=
> # lsof | grep hosted-engine.conf
> ovirt-ha- 166609   vdsm    5u  REG  253,0  1021  134433491 /etc/ovirt-hosted-engine/hosted-engine.conf (deleted)
> ovirt-ha- 166609   vdsm    7u  REG  253,0  1021  134433453 /etc/ovirt-hosted-engine/hosted-engine.conf (deleted)
> ovirt-ha- 166609   vdsm    8u  REG  253,0  1021  134433489 /etc/ovirt-hosted-engine/hosted-engine.conf (deleted)
> ovirt-ha- 166609   vdsm    9u  REG  253,0  1021  134433493 /etc/ovirt-hosted-engine/hosted-engine.conf~
> ovirt-ha- 166609   vdsm   10u  REG  253,0  1021  134433495 /etc/ovirt-hosted-engine/hosted-engine.conf
> tail      167865   root    3r  REG  253,0  1021  134433493 /etc/ovirt-hosted-engine/hosted-engine.conf~
> # mv /etc/ovirt-hosted-engine/hosted-engine.conf
> /etc/ovirt-hosted-engine/hosted-engine.conf_123
> # lsof | grep hosted-engine.conf
> ovirt-ha- 166609   vdsm    5u  REG  253,0  1021  134433491 /etc/ovirt-hosted-engine/hosted-engine.conf (deleted)
> ovirt-ha- 166609   vdsm    7u  REG  253,0  1021  134433453 /etc/ovirt-hosted-engine/hosted-engine.conf (deleted)
> ovirt-ha- 166609   vdsm    8u  REG  253,0  1021  134433489 /etc/ovirt-hosted-engine/hosted-engine.conf (deleted)
> ovirt-ha- 166609   vdsm    9u  REG  253,0  1021  134433493 /etc/ovirt-hosted-e

Re: [ovirt-users] Hosted engine on gluster problem

2016-04-14 Thread Sandro Bonazzola
On Thu, Apr 14, 2016 at 7:35 PM, Nir Soffer  wrote:

> On Wed, Apr 13, 2016 at 4:34 PM, Luiz Claudio Prazeres Goncalves
>  wrote:
> > Nir, here is the problem:
> > https://bugzilla.redhat.com/show_bug.cgi?id=1298693
> >
> > When you do a hosted-engine --deploy and pick "glusterfs" you don't have
> a
> > way to define the mount options, therefore, the use of the
> > "backupvol-server", however when you create a storage domain from the UI
> you
> > can, like the attached screen shot.
> >
> >
> > In the hosted-engine --deploy, I would expect a flow which includes not
> only
> > the "gluster" entrypoint, but also the gluster mount options which is
> > missing today. This option would be optional, but would remove the single
> > point of failure described on the Bug 1298693.
> >
> > for example:
> >
> > Existing entry point on the "hosted-engine --deploy" flow
> > gluster1.xyz.com:/engine
>
> I agree, this feature must be supported.
>

It will, and it's currently targeted to 4.0.
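
For reference, the option being discussed here is a standard glusterfs mount
option that the deploy flow would only need to pass through; a manual mount using
the values quoted in this thread would look roughly like the sketch below (the
local mount point is a placeholder, not something taken from the thread):

  # minimal sketch: mount the engine volume with a fallback volfile server
  mount -t glusterfs \
    -o backupvolfile-server=gluster2.xyz.com,fetch-attempts=3,log-level=WARNING,log-file=/var/log/glusterfs/gluster_engine_domain.log \
    gluster1.xyz.com:/engine /mnt/engine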



>
> > Missing option on the "hosted-engine --deploy" flow :
> > backupvolfile-server=gluster2.xyz.com
> ,fetch-attempts=3,log-level=WARNING,log-file=/var/log/glusterfs/gluster_engine_domain.log
> >
> > Sandro, it seems to me a simple solution which can be easily fixed.
> >
> > What do you think?
> >
> > Regards
> > -Luiz
> >
> >
> >
> > 2016-04-13 4:15 GMT-03:00 Sandro Bonazzola :
> >>
> >>
> >>
> >> On Tue, Apr 12, 2016 at 6:47 PM, Nir Soffer  wrote:
> >>>
> >>> On Tue, Apr 12, 2016 at 3:05 PM, Luiz Claudio Prazeres Goncalves
> >>>  wrote:
> >>> > Hi Sandro, I've been using gluster with 3 external hosts for a while
> >>> > and
> >>> > things are working pretty well, however this single point of failure
> >>> > looks
> >>> > like a simple feature to implement,but critical to anyone who wants
> to
> >>> > use
> >>> > gluster on production  . This is not hyperconvergency which has other
> >>> > issues/implications. So , why not have this feature out on 3.6
> branch?
> >>> > It
> >>> > looks like just let vdsm use the 'backupvol-server' option when
> >>> > mounting the
> >>> > engine domain and make the property tests.
> >>>
> >>> Can you explain what is the problem, and what is the suggested
> solution?
> >>>
> >>> Engine and vdsm already support the backupvol-server option - you can
> >>> define this option in the storage domain options when you create a
> >>> gluster
> >>> storage domain. With this option vdsm should be able to connect to
> >>> gluster
> >>> storage domain even if a brick is down.
> >>>
> >>> If you don't have this option in engine , you probably cannot add it
> with
> >>> hosted
> >>> engine setup, since for editing it you must put the storage domain in
> >>> maintenance
> >>> and if you do this the engine vm will be killed :-) This is is one of
> >>> the issues with
> >>> engine managing the storage domain it runs on.
> >>>
> >>> I think the best way to avoid this issue, is to add a DNS entry
> >>> providing the addresses
> >>> of all the gluster bricks, and use this address for the gluster
> >>> storage domain. This way
> >>> the glusterfs mount helper can mount the domain even if one of the
> >>> gluster bricks
> >>> are down.
> >>>
> >>> Again, we will need some magic from the hosted engine developers to
> >>> modify the
> >>> address of the hosted engine gluster domain on existing system.
> >>
> >>
> >> Magic won't happen without a bz :-) please open one describing what's
> >> requested.
> >>
> >>
> >>>
> >>>
> >>> Nir
> >>>
> >>> >
> >>> > Could you add this feature to the next release of 3.6 branch?
> >>> >
> >>> > Thanks
> >>> > Luiz
> >>> >
> >>> > Em ter, 12 de abr de 2016 05:03, Sandro Bonazzola <
> sbona...@redhat.com>
> >>> > escreveu:
> >>> >>
> >>> >> On Mon, Apr 11, 2016 at 11:44 PM, Bond, Darryl  >
> >>> >> wrote:
> >>> >>>
> >>> >>> My setup is hyperconverged. I have placed my test results in
> >>> >>> https://bugzilla.redhat.com/show_bug.cgi?id=1298693
> >>> >>>
> >>> >>
> >>> >> Ok, so you're aware about the limitation of the single point of
> >>> >> failure.
> >>> >> If you drop the host referenced in hosted engine configuration for
> the
> >>> >> initial setup it won't be able to connect to shared storage even if
> >>> >> the
> >>> >> other hosts in the cluster are up since the entry point is down.
> >>> >> Note that hyperconverged deployment is not supported in 3.6.
> >>> >>
> >>> >>
> >>> >>>
> >>> >>>
> >>> >>> Short description of setup:
> >>> >>>
> >>> >>> 3 hosts with 2 disks each set up with gluster replica 3 across the
> 6
> >>> >>> disks volume name hosted-engine.
> >>> >>>
> >>> >>> Hostname hosted-storage configured in /etc//hosts to point to the
> >>> >>> host1.
> >>> >>>
> >>> >>> Installed hosted engine on host1 with the hosted engine storage
> path
> >>> >>> =
> >>> >>> hosted-storage:/hosted-engine
> >>> >>>
> >>> >>> Install first engine on h1 successful. Hosts h2 and h3 added to the
> >>> >>> hosted engine. All works fine.
> >>> >>>
> >>> >>> Additional storage and non-hosted

Re: [ovirt-users] Hosted engine on gluster problem

2016-04-14 Thread Sandro Bonazzola
On Thu, Apr 14, 2016 at 7:07 PM, Luiz Claudio Prazeres Goncalves <
luiz...@gmail.com> wrote:

> Sandro, any word here? Btw, I'm not talking about hyperconvergency in this
> case, but 3 external gluster nodes using replica 3
>
> Regards
> Luiz
>
> Em qua, 13 de abr de 2016 10:34, Luiz Claudio Prazeres Goncalves <
> luiz...@gmail.com> escreveu:
>
>> Nir, here is the problem:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1298693
>>
>> When you do a hosted-engine --deploy and pick "glusterfs" you don't have
>> a way to define the mount options, therefore, the use of the
>> "backupvol-server", however when you create a storage domain from the UI
>> you can, like the attached screen shot.
>>
>>
>> In the hosted-engine --deploy, I would expect a flow which includes not
>> only the "gluster" entrypoint, but also the gluster mount options which is
>> missing today. This option would be optional, but would remove the single
>> point of failure described on the Bug 1298693.
>>
>> for example:
>>
>> Existing entry point on the "hosted-engine --deploy" flow
>> gluster1.xyz.com:/engine
>>
>>
>> Missing option on the "hosted-engine --deploy" flow :
>> backupvolfile-server=gluster2.xyz.com
>> ,fetch-attempts=3,log-level=WARNING,log-file=/var/log/glusterfs/gluster_engine_domain.log
>>
>> ​Sandro, it seems to me a simple solution which can be easily fixed.
>>
>> What do you think?
>>
>

The whole integration team is currently busy and we don't have enough
resources to handle this in the 3.6.6 time frame.
We'll be happy to help review patches, but we have more urgent items to
handle right now.
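
In the meantime, the DNS-based workaround Nir describes further down in this
thread can be sketched as follows, assuming a round-robin record that lists every
brick host (the name and addresses below are placeholders):

  # one name resolving to all gluster brick hosts
  dig +short gluster-storage.xyz.com
  # 192.0.2.11
  # 192.0.2.12
  # 192.0.2.13

  # use that name as the storage domain address so the mount is not tied to a
  # single entry point
  mount -t glusterfs gluster-storage.xyz.com:/engine /mnt/engine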




>
>> Regards
>> -Luiz​
>>
>>
>>
>> 2016-04-13 4:15 GMT-03:00 Sandro Bonazzola :
>>
>>>
>>>
>>> On Tue, Apr 12, 2016 at 6:47 PM, Nir Soffer  wrote:
>>>
 On Tue, Apr 12, 2016 at 3:05 PM, Luiz Claudio Prazeres Goncalves
  wrote:
 > Hi Sandro, I've been using gluster with 3 external hosts for a while
 and
 > things are working pretty well, however this single point of failure
 looks
 > like a simple feature to implement,but critical to anyone who wants
 to use
 > gluster on production  . This is not hyperconvergency which has other
 > issues/implications. So , why not have this feature out on 3.6
 branch? It
 > looks like just let vdsm use the 'backupvol-server' option when
 mounting the
 > engine domain and make the property tests.

 Can you explain what is the problem, and what is the suggested solution?

 Engine and vdsm already support the backupvol-server option - you can
 define this option in the storage domain options when you create a
 gluster
 storage domain. With this option vdsm should be able to connect to
 gluster
 storage domain even if a brick is down.

 If you don't have this option in engine , you probably cannot add it
 with hosted
 engine setup, since for editing it you must put the storage domain in
 maintenance
 and if you do this the engine vm will be killed :-) This is is one of
 the issues with
 engine managing the storage domain it runs on.

 I think the best way to avoid this issue, is to add a DNS entry
 providing the addresses
 of all the gluster bricks, and use this address for the gluster
 storage domain. This way
 the glusterfs mount helper can mount the domain even if one of the
 gluster bricks
 are down.

 Again, we will need some magic from the hosted engine developers to
 modify the
 address of the hosted engine gluster domain on existing system.

>>>
>>> Magic won't happen without a bz :-) please open one describing what's
>>> requested.
>>>
>>>
>>>

 Nir

 >
 > Could you add this feature to the next release of 3.6 branch?
 >
 > Thanks
 > Luiz
 >
 > Em ter, 12 de abr de 2016 05:03, Sandro Bonazzola <
 sbona...@redhat.com>
 > escreveu:
 >>
 >> On Mon, Apr 11, 2016 at 11:44 PM, Bond, Darryl 
 >> wrote:
 >>>
 >>> My setup is hyperconverged. I have placed my test results in
 >>> https://bugzilla.redhat.com/show_bug.cgi?id=1298693
 >>>
 >>
 >> Ok, so you're aware about the limitation of the single point of
 failure.
 >> If you drop the host referenced in hosted engine configuration for
 the
 >> initial setup it won't be able to connect to shared storage even if
 the
 >> other hosts in the cluster are up since the entry point is down.
 >> Note that hyperconverged deployment is not supported in 3.6.
 >>
 >>
 >>>
 >>>
 >>> Short description of setup:
 >>>
 >>> 3 hosts with 2 disks each set up with gluster replica 3 across the 6
 >>> disks volume name hosted-engine.
 >>>
 >>> Hostname hosted-storage configured in /etc//hosts to point to the
 host1.
 >>>
 >>> Installed hosted engine on host1 with the hosted engine storage
 path =
 >>> hosted-storage:/hosted-engine
 

[ovirt-users] serial console and permission

2016-04-14 Thread Nathanaël Blanchet

Hi all,

I've successfully set up the serial console feature for all my VMs.
But the only way I found to make it work is to give each user the
UserVmManager role, even though they already have the SuperUser role at the
datacenter level. I know there is an open bug for this.
A second bug is that adding a group with the UserVmManager permission on
a VM (instead of a single user) doesn't allow its members to get the serial console.

Thank you for your help
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] HA agent fails to start

2016-04-14 Thread Simone Tiraboschi
On Thu, Apr 14, 2016 at 10:38 PM, Simone Tiraboschi  wrote:
> On Thu, Apr 14, 2016 at 6:53 PM, Richard Neuboeck  
> wrote:
>> On 14.04.16 18:46, Simone Tiraboschi wrote:
>>> On Thu, Apr 14, 2016 at 4:04 PM, Richard Neuboeck  
>>> wrote:
 On 04/14/2016 02:14 PM, Simone Tiraboschi wrote:
> On Thu, Apr 14, 2016 at 12:51 PM, Richard Neuboeck
>  wrote:
>> On 04/13/2016 10:00 AM, Simone Tiraboschi wrote:
>>> On Wed, Apr 13, 2016 at 9:38 AM, Richard Neuboeck 
>>>  wrote:
 The answers file shows the setup time of both machines.

 On both machines hosted-engine.conf got rotated right before I wrote
 this mail. Is it possible that I managed to interrupt the rotation with
 the reboot so the backup was accurate but the update not yet written to
 hosted-engine.conf?
>>>
>>> AFAIK we don't have any rotation mechanism for that file; something
>>> else you have in place on that host?
>>
>> Those machines are all CentOS 7.2 minimal installs. The only
>> adaptation I do is installing vim, removing postfix and installing
>> exim, removing firewalld and installing iptables-service. Then I add
>> the oVirt repos (3.6 and 3.6-snapshot) and deploy the host.
>>
>> But checking lsof shows that 'ovirt-ha-agent --no-daemon' has access
>> to the config file (and the one ending with ~):
>>
>> # lsof | grep 'hosted-engine.conf~'
>> ovirt-ha- 193446   vdsm  351u  REG  253,0  1021  135070683 /etc/ovirt-hosted-engine/hosted-engine.conf~
>
> This is not that much relevant if the file was renamed after
> ovirt-ha-agent opened it.
> Try this:
>
> [root@c72he20160405h1 ovirt-hosted-engine-setup]# tail -n1 -f
> /etc/ovirt-hosted-engine/hosted-engine.conf &
> [1] 28866
> [root@c72he20160405h1 ovirt-hosted-engine-setup]# port=
>
> [root@c72he20160405h1 ovirt-hosted-engine-setup]# lsof | grep 
> hosted-engine.conf
> tail  28866  root    3r  REG  253,0  1014  1595898 /etc/ovirt-hosted-engine/hosted-engine.conf
> [root@c72he20160405h1 ovirt-hosted-engine-setup]# mv
> /etc/ovirt-hosted-engine/hosted-engine.conf
> /etc/ovirt-hosted-engine/hosted-engine.conf_123
> [root@c72he20160405h1 ovirt-hosted-engine-setup]# lsof | grep 
> hosted-engine.conf
> tail  28866  root    3r  REG  253,0  1014  1595898 /etc/ovirt-hosted-engine/hosted-engine.conf_123
> [root@c72he20160405h1 ovirt-hosted-engine-setup]#
>

 I've issued the commands you suggested but I don't know how that
 helps to find the process accessing the config files.

 After moving the hosted-engine.conf file the HA agent crashed
 logging the information that the config file is not available.

 Here is the output from every command:

 # tail -n1 -f /etc/ovirt-hosted-engine/hosted-engine.conf &
 [1] 167865
 [root@cube-two ~]# port=
 # lsof | grep hosted-engine.conf
 ovirt-ha- 166609   vdsm    5u  REG  253,0  1021  134433491 /etc/ovirt-hosted-engine/hosted-engine.conf (deleted)
 ovirt-ha- 166609   vdsm    7u  REG  253,0  1021  134433453 /etc/ovirt-hosted-engine/hosted-engine.conf (deleted)
 ovirt-ha- 166609   vdsm    8u  REG  253,0  1021  134433489 /etc/ovirt-hosted-engine/hosted-engine.conf (deleted)
 ovirt-ha- 166609   vdsm    9u  REG  253,0  1021  134433493 /etc/ovirt-hosted-engine/hosted-engine.conf~
 ovirt-ha- 166609   vdsm   10u  REG  253,0  1021  134433495 /etc/ovirt-hosted-engine/hosted-engine.conf
 tail      167865   root    3r  REG  253,0  1021  134433493 /etc/ovirt-hosted-engine/hosted-engine.conf~
 # mv /etc/ovirt-hosted-engine/hosted-engine.conf
 /etc/ovirt-hosted-engine/hosted-engine.conf_123
 # lsof | grep hosted-engine.conf
 ovirt-ha- 166609   vdsm    5u  REG  253,0  1021  134433491 /etc/ovirt-hosted-engine/hosted-engine.conf (deleted)
 ovirt-ha- 166609   vdsm    7u  REG  253,0  1021  134433453 /etc/ovirt-hosted-engine/hosted-engine.conf (deleted)
 ovirt-ha- 166609   vdsm    8u  REG  253,0  1021  134433489 /etc/ovirt-hosted-engine/hosted-engine.conf (deleted)
 ovirt-ha- 166609   vdsm    9u  REG  253,0  1021  134433493 /etc/ovirt-hosted-engine/hosted-engine.conf (deleted)
 ovirt-ha- 166609   vdsm   10u  REG  253,0  1021  134433495 /etc/ovirt-hosted-eng

Re: [ovirt-users] HA agent fails to start

2016-04-14 Thread Simone Tiraboschi
On Thu, Apr 14, 2016 at 6:53 PM, Richard Neuboeck  wrote:
> On 14.04.16 18:46, Simone Tiraboschi wrote:
>> On Thu, Apr 14, 2016 at 4:04 PM, Richard Neuboeck  
>> wrote:
>>> On 04/14/2016 02:14 PM, Simone Tiraboschi wrote:
 On Thu, Apr 14, 2016 at 12:51 PM, Richard Neuboeck
  wrote:
> On 04/13/2016 10:00 AM, Simone Tiraboschi wrote:
>> On Wed, Apr 13, 2016 at 9:38 AM, Richard Neuboeck 
>>  wrote:
>>> The answers file shows the setup time of both machines.
>>>
>>> On both machines hosted-engine.conf got rotated right before I wrote
>>> this mail. Is it possible that I managed to interrupt the rotation with
>>> the reboot so the backup was accurate but the update not yet written to
>>> hosted-engine.conf?
>>
>> AFAIK we don't have any rotation mechanism for that file; something
>> else you have in place on that host?
>
> Those machines are all CentOS 7.2 minimal installs. The only
> adaptation I do is installing vim, removing postfix and installing
> exim, removing firewalld and installing iptables-service. Then I add
> the oVirt repos (3.6 and 3.6-snapshot) and deploy the host.
>
> But checking lsof shows that 'ovirt-ha-agent --no-daemon' has access
> to the config file (and the one ending with ~):
>
> # lsof | grep 'hosted-engine.conf~'
> ovirt-ha- 193446   vdsm  351u  REG  253,0  1021  135070683 /etc/ovirt-hosted-engine/hosted-engine.conf~

 This is not that much relevant if the file was renamed after
 ovirt-ha-agent opened it.
 Try this:

 [root@c72he20160405h1 ovirt-hosted-engine-setup]# tail -n1 -f
 /etc/ovirt-hosted-engine/hosted-engine.conf &
 [1] 28866
 [root@c72he20160405h1 ovirt-hosted-engine-setup]# port=

 [root@c72he20160405h1 ovirt-hosted-engine-setup]# lsof | grep 
 hosted-engine.conf
 tail  28866  root    3r  REG  253,0  1014  1595898 /etc/ovirt-hosted-engine/hosted-engine.conf
 [root@c72he20160405h1 ovirt-hosted-engine-setup]# mv
 /etc/ovirt-hosted-engine/hosted-engine.conf
 /etc/ovirt-hosted-engine/hosted-engine.conf_123
 [root@c72he20160405h1 ovirt-hosted-engine-setup]# lsof | grep 
 hosted-engine.conf
 tail  28866  root    3r  REG  253,0  1014  1595898 /etc/ovirt-hosted-engine/hosted-engine.conf_123
 [root@c72he20160405h1 ovirt-hosted-engine-setup]#

>>>
>>> I've issued the commands you suggested but I don't know how that
>>> helps to find the process accessing the config files.
>>>
>>> After moving the hosted-engine.conf file the HA agent crashed
>>> logging the information that the config file is not available.
>>>
>>> Here is the output from every command:
>>>
>>> # tail -n1 -f /etc/ovirt-hosted-engine/hosted-engine.conf &
>>> [1] 167865
>>> [root@cube-two ~]# port=
>>> # lsof | grep hosted-engine.conf
>>> ovirt-ha- 166609   vdsm    5u  REG  253,0  1021  134433491 /etc/ovirt-hosted-engine/hosted-engine.conf (deleted)
>>> ovirt-ha- 166609   vdsm    7u  REG  253,0  1021  134433453 /etc/ovirt-hosted-engine/hosted-engine.conf (deleted)
>>> ovirt-ha- 166609   vdsm    8u  REG  253,0  1021  134433489 /etc/ovirt-hosted-engine/hosted-engine.conf (deleted)
>>> ovirt-ha- 166609   vdsm    9u  REG  253,0  1021  134433493 /etc/ovirt-hosted-engine/hosted-engine.conf~
>>> ovirt-ha- 166609   vdsm   10u  REG  253,0  1021  134433495 /etc/ovirt-hosted-engine/hosted-engine.conf
>>> tail      167865   root    3r  REG  253,0  1021  134433493 /etc/ovirt-hosted-engine/hosted-engine.conf~
>>> # mv /etc/ovirt-hosted-engine/hosted-engine.conf
>>> /etc/ovirt-hosted-engine/hosted-engine.conf_123
>>> # lsof | grep hosted-engine.conf
>>> ovirt-ha- 166609   vdsm    5u  REG  253,0  1021  134433491 /etc/ovirt-hosted-engine/hosted-engine.conf (deleted)
>>> ovirt-ha- 166609   vdsm    7u  REG  253,0  1021  134433453 /etc/ovirt-hosted-engine/hosted-engine.conf (deleted)
>>> ovirt-ha- 166609   vdsm    8u  REG  253,0  1021  134433489 /etc/ovirt-hosted-engine/hosted-engine.conf (deleted)
>>> ovirt-ha- 166609   vdsm    9u  REG  253,0  1021  134433493 /etc/ovirt-hosted-engine/hosted-engine.conf (deleted)
>>> ovirt-ha- 166609   vdsm   10u  REG  253,0  1021  134433495 /etc/ovirt-hosted-engine/hosted-engine.conf (deleted)
>>> ovirt-ha- 166609   vdsm   12u  REG  253,0  1021  134433498 /etc/ovirt-hosted-engine/hosted-engine.conf~

[ovirt-users] VM migrations via san replication?

2016-04-14 Thread At Work
I have a question regarding migration of VMs.  It's my hope that someone
can tell me if my migration idea can work or if it is not possible.

I want to migrate about 100-200 VMs from one oVirt deployment to a new
oVirt deployment.  Some of the VMs are over 3TB in size.  Exporting and
importing these via NFS would involve downtime and probably be a very
lengthy process.  I'm wondering if there's a way to get around this by
using SAN replication.

I have an EqualLogic SAN group currently in use by an oVirt 3.4.0-1.el6
installation.  I have a different EqualLogic group set up as storage for a
different oVirt installation on another network.  The new oVirt installation
is version 3.6.4.1-1.el7.centos.

My idea is to avoid the downtime and lengthy export/import process by
telling my current production SAN to do volume replication to the new SAN
that's been allocated for use as storage for our new oVirt installation.
Once replication is in sync, I'll have exact copies of the original iSCSI
volumes, VMs, metadata, etc.  I am hoping I can then break replication,
tell my new oVirt installation to log into the freshly-copied volumes, and
use the VMs that were previously in use on the other installation.

Due to the differences in oVirt versions and not knowing how oVirt
internally handles metadata, I am not sure if this is possible.  Can anyone
confirm if this will work or not?

Thank you all!
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] SR-IOV: "ethX" interfaces not getting cleaned-up and virtio stops working

2016-04-14 Thread Matthew Trent
I've been experimenting with SR-IOV. I have a network with two vNIC profiles, 
one for passthrough and one for virtio. Per this video:
https://www.youtube.com/watch?v=A-MROZ8D06Y

I think I should be able to do "mixed mode" using SR-IOV and virtio on the same 
physical NIC. It does work, initially.

But if I flip the VM between the two vNIC profiles, eventually the virtio one 
stops passing traffic. And I've noticed the Network Interfaces tab on that host 
shows an increasing number of eth0, eth1, eth2, eth3 interfaces, all with the 
MAC address of the VM. An equal number of interfaces has been removed from the
p3p1_x VF list. I'm guessing this is related...? See attached screenshot.
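
One way to confirm that is to look at the VF table on the physical function
itself; something along these lines should show which VFs still carry the VM's
MAC address (assuming p3p1 is the PF name, as in the screenshot):

  # list the VFs hanging off the physical function together with their MACs
  ip link show p3p1
  # look for lines like: vf 3 MAC 00:1a:4a:xx:xx:xx, spoof checking on, ...

  # the PCI view of the same VFs
  lspci | grep -i "virtual function"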

This is a Dell R530 and the NIC is an Intel X540. 

OS Version: RHEL - 7 - 2.1511.el7.centos.2.10
Kernel Version: 3.10.0 - 327.13.1.el7.x86_64
KVM Version: 2.3.0 - 31.el7_2.7.1
LIBVIRT Version: libvirt-1.2.17-13.el7_2.4
VDSM Version: vdsm-4.17.23.2-1.el7

--
Matthew Trent
Network Engineer
Lewis County IT Services
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Add a physical drive to Ovirt

2016-04-14 Thread Bryan Hughes
All,

I am trying to add a physical drive to oVirt.  For instance, my physical
machine has 3 extra hard drives attached and I want to add them directly instead
of going through Gluster or iSCSI.  Is this possible?

I tried creating an xfs filesystem on /dev/sde1 and adding that as a
POSIX-compliant FS storage domain, but it fails when trying to mount the filesystem.
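
In case it helps, a common reason a POSIX domain fails to mount is that vdsm
cannot write to the filesystem; a rough manual check along these lines may narrow
it down (the ownership values are the usual vdsm:kvm uid/gid of 36:36, and the
mount point is just a scratch directory):

  # format and mount the disk by hand first
  mkfs.xfs /dev/sde1
  mkdir -p /mnt/test-sde1
  mount -t xfs /dev/sde1 /mnt/test-sde1

  # vdsm runs as uid/gid 36 (vdsm:kvm) and must be able to write to the mount
  chown 36:36 /mnt/test-sde1
  sudo -u vdsm touch /mnt/test-sde1/write-test && echo "vdsm can write"

  umount /mnt/test-sde1
  # then retry the POSIX domain with path /dev/sde1 and VFS type xfs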

Any help is appreciated.

Thanks,
Bryan
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted engine on gluster problem

2016-04-14 Thread Nir Soffer
On Wed, Apr 13, 2016 at 4:34 PM, Luiz Claudio Prazeres Goncalves
 wrote:
> Nir, here is the problem:
> https://bugzilla.redhat.com/show_bug.cgi?id=1298693
>
> When you do a hosted-engine --deploy and pick "glusterfs" you don't have a
> way to define the mount options, therefore, the use of the
> "backupvol-server", however when you create a storage domain from the UI you
> can, like the attached screen shot.
>
>
> In the hosted-engine --deploy, I would expect a flow which includes not only
> the "gluster" entrypoint, but also the gluster mount options which is
> missing today. This option would be optional, but would remove the single
> point of failure described on the Bug 1298693.
>
> for example:
>
> Existing entry point on the "hosted-engine --deploy" flow
> gluster1.xyz.com:/engine

I agree, this feature must be supported.

> Missing option on the "hosted-engine --deploy" flow :
> backupvolfile-server=gluster2.xyz.com,fetch-attempts=3,log-level=WARNING,log-file=/var/log/glusterfs/gluster_engine_domain.log
>
> Sandro, it seems to me a simple solution which can be easily fixed.
>
> What do you think?
>
> Regards
> -Luiz
>
>
>
> 2016-04-13 4:15 GMT-03:00 Sandro Bonazzola :
>>
>>
>>
>> On Tue, Apr 12, 2016 at 6:47 PM, Nir Soffer  wrote:
>>>
>>> On Tue, Apr 12, 2016 at 3:05 PM, Luiz Claudio Prazeres Goncalves
>>>  wrote:
>>> > Hi Sandro, I've been using gluster with 3 external hosts for a while
>>> > and
>>> > things are working pretty well, however this single point of failure
>>> > looks
>>> > like a simple feature to implement,but critical to anyone who wants to
>>> > use
>>> > gluster on production  . This is not hyperconvergency which has other
>>> > issues/implications. So , why not have this feature out on 3.6 branch?
>>> > It
>>> > looks like just let vdsm use the 'backupvol-server' option when
>>> > mounting the
>>> > engine domain and make the property tests.
>>>
>>> Can you explain what is the problem, and what is the suggested solution?
>>>
>>> Engine and vdsm already support the backupvol-server option - you can
>>> define this option in the storage domain options when you create a
>>> gluster
>>> storage domain. With this option vdsm should be able to connect to
>>> gluster
>>> storage domain even if a brick is down.
>>>
>>> If you don't have this option in engine , you probably cannot add it with
>>> hosted
>>> engine setup, since for editing it you must put the storage domain in
>>> maintenance
>>> and if you do this the engine vm will be killed :-) This is is one of
>>> the issues with
>>> engine managing the storage domain it runs on.
>>>
>>> I think the best way to avoid this issue, is to add a DNS entry
>>> providing the addresses
>>> of all the gluster bricks, and use this address for the gluster
>>> storage domain. This way
>>> the glusterfs mount helper can mount the domain even if one of the
>>> gluster bricks
>>> are down.
>>>
>>> Again, we will need some magic from the hosted engine developers to
>>> modify the
>>> address of the hosted engine gluster domain on existing system.
>>
>>
>> Magic won't happen without a bz :-) please open one describing what's
>> requested.
>>
>>
>>>
>>>
>>> Nir
>>>
>>> >
>>> > Could you add this feature to the next release of 3.6 branch?
>>> >
>>> > Thanks
>>> > Luiz
>>> >
>>> > Em ter, 12 de abr de 2016 05:03, Sandro Bonazzola 
>>> > escreveu:
>>> >>
>>> >> On Mon, Apr 11, 2016 at 11:44 PM, Bond, Darryl 
>>> >> wrote:
>>> >>>
>>> >>> My setup is hyperconverged. I have placed my test results in
>>> >>> https://bugzilla.redhat.com/show_bug.cgi?id=1298693
>>> >>>
>>> >>
>>> >> Ok, so you're aware about the limitation of the single point of
>>> >> failure.
>>> >> If you drop the host referenced in hosted engine configuration for the
>>> >> initial setup it won't be able to connect to shared storage even if
>>> >> the
>>> >> other hosts in the cluster are up since the entry point is down.
>>> >> Note that hyperconverged deployment is not supported in 3.6.
>>> >>
>>> >>
>>> >>>
>>> >>>
>>> >>> Short description of setup:
>>> >>>
>>> >>> 3 hosts with 2 disks each set up with gluster replica 3 across the 6
>>> >>> disks volume name hosted-engine.
>>> >>>
>>> >>> Hostname hosted-storage configured in /etc//hosts to point to the
>>> >>> host1.
>>> >>>
>>> >>> Installed hosted engine on host1 with the hosted engine storage path
>>> >>> =
>>> >>> hosted-storage:/hosted-engine
>>> >>>
>>> >>> Install first engine on h1 successful. Hosts h2 and h3 added to the
>>> >>> hosted engine. All works fine.
>>> >>>
>>> >>> Additional storage and non-hosted engine hosts added etc.
>>> >>>
>>> >>> Additional VMs added to hosted-engine storage (oVirt Reports VM and
>>> >>> Cinder VM). Additional VM's are hosted by other storage - cinder and
>>> >>> NFS.
>>> >>>
>>> >>> The system is in production.
>>> >>>
>>> >>>
>>> >>> Engine can be migrated around with the web interface.
>>> >>>
>>> >>>
>>> >>> - 3.6.4 upgrade released, follow the upgrade guide, engine is
>>> >>> upgraded

Re: [ovirt-users] Hosted engine on gluster problem

2016-04-14 Thread Luiz Claudio Prazeres Goncalves
Sandro, any word here? Btw, I'm not talking about hyperconvergence in this
case, but about 3 external gluster nodes using replica 3.

Regards
Luiz

Em qua, 13 de abr de 2016 10:34, Luiz Claudio Prazeres Goncalves <
luiz...@gmail.com> escreveu:

> Nir, here is the problem:
> https://bugzilla.redhat.com/show_bug.cgi?id=1298693
>
> When you do a hosted-engine --deploy and pick "glusterfs" you don't have a
> way to define the mount options, therefore, the use of the
> "backupvol-server", however when you create a storage domain from the UI
> you can, like the attached screen shot.
>
>
> In the hosted-engine --deploy, I would expect a flow which includes not
> only the "gluster" entrypoint, but also the gluster mount options which is
> missing today. This option would be optional, but would remove the single
> point of failure described on the Bug 1298693.
>
> for example:
>
> Existing entry point on the "hosted-engine --deploy" flow
> gluster1.xyz.com:/engine
>
>
> Missing option on the "hosted-engine --deploy" flow :
> backupvolfile-server=gluster2.xyz.com
> ,fetch-attempts=3,log-level=WARNING,log-file=/var/log/glusterfs/gluster_engine_domain.log
>
> ​Sandro, it seems to me a simple solution which can be easily fixed.
>
> What do you think?
>
> Regards
> -Luiz​
>
>
>
> 2016-04-13 4:15 GMT-03:00 Sandro Bonazzola :
>
>>
>>
>> On Tue, Apr 12, 2016 at 6:47 PM, Nir Soffer  wrote:
>>
>>> On Tue, Apr 12, 2016 at 3:05 PM, Luiz Claudio Prazeres Goncalves
>>>  wrote:
>>> > Hi Sandro, I've been using gluster with 3 external hosts for a while
>>> and
>>> > things are working pretty well, however this single point of failure
>>> looks
>>> > like a simple feature to implement,but critical to anyone who wants to
>>> use
>>> > gluster on production  . This is not hyperconvergency which has other
>>> > issues/implications. So , why not have this feature out on 3.6 branch?
>>> It
>>> > looks like just let vdsm use the 'backupvol-server' option when
>>> mounting the
>>> > engine domain and make the property tests.
>>>
>>> Can you explain what is the problem, and what is the suggested solution?
>>>
>>> Engine and vdsm already support the backupvol-server option - you can
>>> define this option in the storage domain options when you create a
>>> gluster
>>> storage domain. With this option vdsm should be able to connect to
>>> gluster
>>> storage domain even if a brick is down.
>>>
>>> If you don't have this option in engine , you probably cannot add it
>>> with hosted
>>> engine setup, since for editing it you must put the storage domain in
>>> maintenance
>>> and if you do this the engine vm will be killed :-) This is is one of
>>> the issues with
>>> engine managing the storage domain it runs on.
>>>
>>> I think the best way to avoid this issue, is to add a DNS entry
>>> providing the addresses
>>> of all the gluster bricks, and use this address for the gluster
>>> storage domain. This way
>>> the glusterfs mount helper can mount the domain even if one of the
>>> gluster bricks
>>> are down.
>>>
>>> Again, we will need some magic from the hosted engine developers to
>>> modify the
>>> address of the hosted engine gluster domain on existing system.
>>>
>>
>> Magic won't happen without a bz :-) please open one describing what's
>> requested.
>>
>>
>>
>>>
>>> Nir
>>>
>>> >
>>> > Could you add this feature to the next release of 3.6 branch?
>>> >
>>> > Thanks
>>> > Luiz
>>> >
>>> > Em ter, 12 de abr de 2016 05:03, Sandro Bonazzola >> >
>>> > escreveu:
>>> >>
>>> >> On Mon, Apr 11, 2016 at 11:44 PM, Bond, Darryl 
>>> >> wrote:
>>> >>>
>>> >>> My setup is hyperconverged. I have placed my test results in
>>> >>> https://bugzilla.redhat.com/show_bug.cgi?id=1298693
>>> >>>
>>> >>
>>> >> Ok, so you're aware about the limitation of the single point of
>>> failure.
>>> >> If you drop the host referenced in hosted engine configuration for the
>>> >> initial setup it won't be able to connect to shared storage even if
>>> the
>>> >> other hosts in the cluster are up since the entry point is down.
>>> >> Note that hyperconverged deployment is not supported in 3.6.
>>> >>
>>> >>
>>> >>>
>>> >>>
>>> >>> Short description of setup:
>>> >>>
>>> >>> 3 hosts with 2 disks each set up with gluster replica 3 across the 6
>>> >>> disks volume name hosted-engine.
>>> >>>
>>> >>> Hostname hosted-storage configured in /etc//hosts to point to the
>>> host1.
>>> >>>
>>> >>> Installed hosted engine on host1 with the hosted engine storage path
>>> =
>>> >>> hosted-storage:/hosted-engine
>>> >>>
>>> >>> Install first engine on h1 successful. Hosts h2 and h3 added to the
>>> >>> hosted engine. All works fine.
>>> >>>
>>> >>> Additional storage and non-hosted engine hosts added etc.
>>> >>>
>>> >>> Additional VMs added to hosted-engine storage (oVirt Reports VM and
>>> >>> Cinder VM). Additional VM's are hosted by other storage - cinder and
>>> NFS.
>>> >>>
>>> >>> The system is in production.
>>> >>>
>>> >>>
>>> >>> Engine can be migrated around with the web interface

Re: [ovirt-users] HA agent fails to start

2016-04-14 Thread Simone Tiraboschi
On Thu, Apr 14, 2016 at 4:04 PM, Richard Neuboeck  wrote:
> On 04/14/2016 02:14 PM, Simone Tiraboschi wrote:
>> On Thu, Apr 14, 2016 at 12:51 PM, Richard Neuboeck
>>  wrote:
>>> On 04/13/2016 10:00 AM, Simone Tiraboschi wrote:
 On Wed, Apr 13, 2016 at 9:38 AM, Richard Neuboeck  
 wrote:
> The answers file shows the setup time of both machines.
>
> On both machines hosted-engine.conf got rotated right before I wrote
> this mail. Is it possible that I managed to interrupt the rotation with
> the reboot so the backup was accurate but the update not yet written to
> hosted-engine.conf?

 AFAIK we don't have any rotation mechanism for that file; something
 else you have in place on that host?
>>>
>>> Those machines are all CentOS 7.2 minimal installs. The only
>>> adaptation I do is installing vim, removing postfix and installing
>>> exim, removing firewalld and installing iptables-service. Then I add
>>> the oVirt repos (3.6 and 3.6-snapshot) and deploy the host.
>>>
>>> But checking lsof shows that 'ovirt-ha-agent --no-daemon' has access
>>> to the config file (and the one ending with ~):
>>>
>>> # lsof | grep 'hosted-engine.conf~'
>>> ovirt-ha- 193446   vdsm  351u  REG  253,0  1021  135070683 /etc/ovirt-hosted-engine/hosted-engine.conf~
>>
>> This is not that much relevant if the file was renamed after
>> ovirt-ha-agent opened it.
>> Try this:
>>
>> [root@c72he20160405h1 ovirt-hosted-engine-setup]# tail -n1 -f
>> /etc/ovirt-hosted-engine/hosted-engine.conf &
>> [1] 28866
>> [root@c72he20160405h1 ovirt-hosted-engine-setup]# port=
>>
>> [root@c72he20160405h1 ovirt-hosted-engine-setup]# lsof | grep 
>> hosted-engine.conf
>> tail  28866  root    3r  REG  253,0  1014  1595898 /etc/ovirt-hosted-engine/hosted-engine.conf
>> [root@c72he20160405h1 ovirt-hosted-engine-setup]# mv
>> /etc/ovirt-hosted-engine/hosted-engine.conf
>> /etc/ovirt-hosted-engine/hosted-engine.conf_123
>> [root@c72he20160405h1 ovirt-hosted-engine-setup]# lsof | grep 
>> hosted-engine.conf
>> tail  28866  root    3r  REG  253,0  1014  1595898 /etc/ovirt-hosted-engine/hosted-engine.conf_123
>> [root@c72he20160405h1 ovirt-hosted-engine-setup]#
>>
>
> I've issued the commands you suggested but I don't know how that
> helps to find the process accessing the config files.
>
> After moving the hosted-engine.conf file the HA agent crashed
> logging the information that the config file is not available.
>
> Here is the output from every command:
>
> # tail -n1 -f /etc/ovirt-hosted-engine/hosted-engine.conf &
> [1] 167865
> [root@cube-two ~]# port=
> # lsof | grep hosted-engine.conf
> ovirt-ha- 166609   vdsm    5u  REG  253,0  1021  134433491 /etc/ovirt-hosted-engine/hosted-engine.conf (deleted)
> ovirt-ha- 166609   vdsm    7u  REG  253,0  1021  134433453 /etc/ovirt-hosted-engine/hosted-engine.conf (deleted)
> ovirt-ha- 166609   vdsm    8u  REG  253,0  1021  134433489 /etc/ovirt-hosted-engine/hosted-engine.conf (deleted)
> ovirt-ha- 166609   vdsm    9u  REG  253,0  1021  134433493 /etc/ovirt-hosted-engine/hosted-engine.conf~
> ovirt-ha- 166609   vdsm   10u  REG  253,0  1021  134433495 /etc/ovirt-hosted-engine/hosted-engine.conf
> tail      167865   root    3r  REG  253,0  1021  134433493 /etc/ovirt-hosted-engine/hosted-engine.conf~
> # mv /etc/ovirt-hosted-engine/hosted-engine.conf
> /etc/ovirt-hosted-engine/hosted-engine.conf_123
> # lsof | grep hosted-engine.conf
> ovirt-ha- 166609   vdsm    5u  REG  253,0  1021  134433491 /etc/ovirt-hosted-engine/hosted-engine.conf (deleted)
> ovirt-ha- 166609   vdsm    7u  REG  253,0  1021  134433453 /etc/ovirt-hosted-engine/hosted-engine.conf (deleted)
> ovirt-ha- 166609   vdsm    8u  REG  253,0  1021  134433489 /etc/ovirt-hosted-engine/hosted-engine.conf (deleted)
> ovirt-ha- 166609   vdsm    9u  REG  253,0  1021  134433493 /etc/ovirt-hosted-engine/hosted-engine.conf (deleted)
> ovirt-ha- 166609   vdsm   10u  REG  253,0  1021  134433495 /etc/ovirt-hosted-engine/hosted-engine.conf (deleted)
> ovirt-ha- 166609   vdsm   12u  REG  253,0  1021  134433498 /etc/ovirt-hosted-engine/hosted-engine.conf~
> ovirt-ha- 166609   vdsm   13u  REG  253,0  1021  134433499 /etc/ovirt-hosted-engine/hosted-engine.conf_123
> tail      167865   root    3r  REG  253,0  1021  134433493 /etc/ovirt-hosted-engine/hosted-engine.conf (deleted)
>
>
>> The issue

[ovirt-users] attaching a storage domain to a datacenter

2016-04-14 Thread Fabrice Bacchella
I'm trying to attach a SAN storage domain to a data center.

It's an old domain that is being re-attached. The import works fine.

But the attach itself fails, either through the UI or via a REST API call.

The log of the API call is:

> POST /api/datacenters/92fbe5d6-2920-401d-b69b-ad4568e4f407/storagedomains 
> HTTP/1.1
> Host: example.com:1443
> Authorization: XXX
> User-Agent: PycURL/7.43.0 libcurl/7.46.0 OpenSSL/1.0.2e zlib/1.2.8
> Cookie: 
> Version: 3
> Content-Type: application/xml
> Accept: application/xml
> Filter: False
> Prefer: persistent-auth
> Content-Length: 60
> 
* upload completely sent off: 60 out of 60 bytes
* ?
< HTTP/1.1 400 Bad Request
< Date: Thu, 14 Apr 2016 16:31:48 GMT
< Server: Apache
< Content-Type: application/xml
< Content-Length: 135
< JSESSIONID: R4TTWw65M5TIhbVJo8VBuOOV
< Connection: close
< 
* ?
< 
< 
< Operation Failed
< []
< 

There is not much in the engine.log:
2016-04-14 18:00:57,831 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default 
task-23) [] Correlation ID: 59497e82, Job ID: 
84e0e22c-bb28-434b-a952-73f0e2c4cfaa, Call Stack: null, Custom Event ID: -1, 
Message: Failed to attach Storage Domain vmsys01 to Data Center en01. (User: 
FA4@apachesso)

Where can I find more information about that?
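
For comparison, the attach call itself normally only carries the domain's name or
id in the body; a hedged example of the same request done with curl (the data
center UUID and domain name are taken from the log above, the credentials are
placeholders):

  # attach an existing storage domain to a data center (REST API v3)
  curl -k -u 'admin@internal:password' \
    -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
    -d '<storage_domain><name>vmsys01</name></storage_domain>' \
    'https://example.com:1443/api/datacenters/92fbe5d6-2920-401d-b69b-ad4568e4f407/storagedomains'

When the engine only logs the generic "Failed to attach" audit message, the
underlying error usually shows up in /var/log/vdsm/vdsm.log on the SPM host.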


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Live migration fails

2016-04-14 Thread Charles Tassell

Hi Nick,

  I had this problem myself a while ago and it turned out the issue was
DNS related (one of the hosts couldn't do a DNS lookup on the name
registered to the other host, so it failed with a strange error). The
best way to diagnose a migration failure is probably the
/var/log/vdsm/vdsm.log file (might be vdsmd instead of vdsm). I'd
recommend ssh'ing into both hosts and running the following command:


 tail -f /var/log/vdsm/vdsm.log |egrep -v 'DEBUG|INFO' |tee 
/tmp/migrate.log


Then attempt the migration.  When the GUI says the migration has failed,
hit Control-C in both windows to stop capturing the log. You can then go 
through the logfiles (stored in /tmp/migrate.log) to find the actual 
error message and post it to the list.  If you can't find the error you 
might want to upload the logfiles somewhere and post the URLs to the 
list so some of the devs or power users can better diagnose the problem.
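
Since DNS turned out to be the culprit in the case above, a quick cross-check on
both hosts before wading through the logs can also save time; a rough sketch
(the hostnames are placeholders for the names the hosts are registered under in
the engine):

  # run on each host: it must be able to resolve the other host's name
  getent hosts host2.example.com   # on host1
  getent hosts host1.example.com   # on host2

  # the name this host is registered under
  hostname --fqdn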


On 16-04-14 01:00 PM, users-requ...@ovirt.org wrote:

Date: Thu, 14 Apr 2016 16:35:34 +0200
From: Sandro Bonazzola 
To: Nick Vercampt 
Cc: users 
Subject: Re: [ovirt-users] Live migration fails
Message-ID:

Content-Type: text/plain; charset="utf-8"

On Thu, Apr 14, 2016 at 2:14 PM, Nick Vercampt 
wrote:


Dear Sirs

I'm writing to ask a question about the live migration on my oVirt setup.

I'm currently running oVirt 3.6 on a virtual test enviroment with 1
default cluster (2 hosts, CentOS 7)  and 1 Gluster enabled cluster (with 2
virtual storage nodes, also CentOS7).

My datacenter has a shared data and iso volume for the two hosts (both
GlusterFS)

Problem:
When i try to migrate my VM (Tiny Linux) from host1 to host2 the operation
fails.

Question:
What log should I check to find a more detailed error message or do you
have an idea what the problem might be?



Googling around, I found:
- http://vaunaspada.babel.it/blog/?p=613
- http://comments.gmane.org/gmane.comp.emulators.ovirt.user/32963

I suggest to start from there. Maybe someone can write a page in ovirt
website about how to diagnose live migration issues.



Kind Regards

Nick Vercampt


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users






___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Live migration fails

2016-04-14 Thread Sandro Bonazzola
On Thu, Apr 14, 2016 at 2:14 PM, Nick Vercampt 
wrote:

> Dear Sirs
>
> I'm writing to ask a question about the live migration on my oVirt setup.
>
> I'm currently running oVirt 3.6 on a virtual test enviroment with 1
> default cluster (2 hosts, CentOS 7)  and 1 Gluster enabled cluster (with 2
> virtual storage nodes, also CentOS7).
>
> My datacenter has a shared data and iso volume for the two hosts (both
> GlusterFS)
>
> Problem:
> When i try to migrate my VM (Tiny Linux) from host1 to host2 the operation
> fails.
>
> Question:
> What log should I check to find a more detailed error message or do you
> have an idea what the problem might be?
>
>
Googling around, I found:
- http://vaunaspada.babel.it/blog/?p=613
- http://comments.gmane.org/gmane.comp.emulators.ovirt.user/32963

I suggest starting from there. Maybe someone can write a page on the oVirt
website about how to diagnose live migration issues.


>
> Kind Regards
>
> Nick Vercampt
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] HA agent fails to start

2016-04-14 Thread Richard Neuboeck
On 04/14/2016 02:14 PM, Simone Tiraboschi wrote:
> On Thu, Apr 14, 2016 at 12:51 PM, Richard Neuboeck
>  wrote:
>> On 04/13/2016 10:00 AM, Simone Tiraboschi wrote:
>>> On Wed, Apr 13, 2016 at 9:38 AM, Richard Neuboeck  
>>> wrote:
 The answers file shows the setup time of both machines.

 On both machines hosted-engine.conf got rotated right before I wrote
 this mail. Is it possible that I managed to interrupt the rotation with
 the reboot so the backup was accurate but the update not yet written to
 hosted-engine.conf?
>>>
>>> AFAIK we don't have any rotation mechanism for that file; something
>>> else you have in place on that host?
>>
>> Those machines are all CentOS 7.2 minimal installs. The only
>> adaptation I do is installing vim, removing postfix and installing
>> exim, removing firewalld and installing iptables-service. Then I add
>> the oVirt repos (3.6 and 3.6-snapshot) and deploy the host.
>>
>> But checking lsof shows that 'ovirt-ha-agent --no-daemon' has access
>> to the config file (and the one ending with ~):
>>
>> # lsof | grep 'hosted-engine.conf~'
>> ovirt-ha- 193446   vdsm  351u  REG  253,0  1021  135070683 /etc/ovirt-hosted-engine/hosted-engine.conf~
> 
> This is not that much relevant if the file was renamed after
> ovirt-ha-agent opened it.
> Try this:
> 
> [root@c72he20160405h1 ovirt-hosted-engine-setup]# tail -n1 -f
> /etc/ovirt-hosted-engine/hosted-engine.conf &
> [1] 28866
> [root@c72he20160405h1 ovirt-hosted-engine-setup]# port=
> 
> [root@c72he20160405h1 ovirt-hosted-engine-setup]# lsof | grep 
> hosted-engine.conf
> tail  28866  root    3r  REG  253,0  1014  1595898 /etc/ovirt-hosted-engine/hosted-engine.conf
> [root@c72he20160405h1 ovirt-hosted-engine-setup]# mv
> /etc/ovirt-hosted-engine/hosted-engine.conf
> /etc/ovirt-hosted-engine/hosted-engine.conf_123
> [root@c72he20160405h1 ovirt-hosted-engine-setup]# lsof | grep 
> hosted-engine.conf
> tail  28866  root    3r  REG  253,0  1014  1595898 /etc/ovirt-hosted-engine/hosted-engine.conf_123
> [root@c72he20160405h1 ovirt-hosted-engine-setup]#
> 

I've issued the commands you suggested but I don't know how that
helps to find the process accessing the config files.

After moving the hosted-engine.conf file the HA agent crashed
logging the information that the config file is not available.

Here is the output from every command:

# tail -n1 -f /etc/ovirt-hosted-engine/hosted-engine.conf &
[1] 167865
[root@cube-two ~]# port=
# lsof | grep hosted-engine.conf
ovirt-ha- 166609   vdsm    5u  REG  253,0  1021  134433491 /etc/ovirt-hosted-engine/hosted-engine.conf (deleted)
ovirt-ha- 166609   vdsm    7u  REG  253,0  1021  134433453 /etc/ovirt-hosted-engine/hosted-engine.conf (deleted)
ovirt-ha- 166609   vdsm    8u  REG  253,0  1021  134433489 /etc/ovirt-hosted-engine/hosted-engine.conf (deleted)
ovirt-ha- 166609   vdsm    9u  REG  253,0  1021  134433493 /etc/ovirt-hosted-engine/hosted-engine.conf~
ovirt-ha- 166609   vdsm   10u  REG  253,0  1021  134433495 /etc/ovirt-hosted-engine/hosted-engine.conf
tail      167865   root    3r  REG  253,0  1021  134433493 /etc/ovirt-hosted-engine/hosted-engine.conf~
# mv /etc/ovirt-hosted-engine/hosted-engine.conf
/etc/ovirt-hosted-engine/hosted-engine.conf_123
# lsof | grep hosted-engine.conf
ovirt-ha- 166609   vdsm    5u  REG  253,0  1021  134433491 /etc/ovirt-hosted-engine/hosted-engine.conf (deleted)
ovirt-ha- 166609   vdsm    7u  REG  253,0  1021  134433453 /etc/ovirt-hosted-engine/hosted-engine.conf (deleted)
ovirt-ha- 166609   vdsm    8u  REG  253,0  1021  134433489 /etc/ovirt-hosted-engine/hosted-engine.conf (deleted)
ovirt-ha- 166609   vdsm    9u  REG  253,0  1021  134433493 /etc/ovirt-hosted-engine/hosted-engine.conf (deleted)
ovirt-ha- 166609   vdsm   10u  REG  253,0  1021  134433495 /etc/ovirt-hosted-engine/hosted-engine.conf (deleted)
ovirt-ha- 166609   vdsm   12u  REG  253,0  1021  134433498 /etc/ovirt-hosted-engine/hosted-engine.conf~
ovirt-ha- 166609   vdsm   13u  REG  253,0  1021  134433499 /etc/ovirt-hosted-engine/hosted-engine.conf_123
tail      167865   root    3r  REG  253,0  1021  134433493 /etc/ovirt-hosted-engine/hosted-engine.conf (deleted)


> The issue is understanding who renames that file on your host.

From what I've seen so far it looks like a child of vdsm accesses
/etc/ovirt-hosted-engine/hosted-engine.conf periodically but is not
responsible for the ~ file.

# au
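
One straightforward way to catch whichever process creates or renames the file is
an audit watch; a sketch, assuming auditd is installed and running:

  # watch the config directory for writes, renames and attribute changes
  auditctl -w /etc/ovirt-hosted-engine/ -p wa -k he-conf

  # once the ~ file reappears, see which executable touched it
  ausearch -k he-conf -i | less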

[ovirt-users] Live migration fails

2016-04-14 Thread Nick Vercampt
Dear Sirs

I'm writing to ask a question about the live migration on my oVirt setup.

I'm currently running oVirt 3.6 on a virtual test environment with 1 default
cluster (2 hosts, CentOS 7) and 1 Gluster-enabled cluster (with 2 virtual
storage nodes, also CentOS 7).

My datacenter has a shared data and iso volume for the two hosts (both
GlusterFS)

Problem:
When I try to migrate my VM (Tiny Linux) from host1 to host2, the operation
fails.

Question:
What log should I check to find a more detailed error message or do you
have an idea what the problem might be?


Kind Regards

Nick Vercampt
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] HA agent fails to start

2016-04-14 Thread Simone Tiraboschi
On Thu, Apr 14, 2016 at 12:51 PM, Richard Neuboeck
 wrote:
> On 04/13/2016 10:00 AM, Simone Tiraboschi wrote:
>> On Wed, Apr 13, 2016 at 9:38 AM, Richard Neuboeck  
>> wrote:
>>> The answers file shows the setup time of both machines.
>>>
>>> On both machines hosted-engine.conf got rotated right before I wrote
>>> this mail. Is it possible that I managed to interrupt the rotation with
>>> the reboot so the backup was accurate but the update not yet written to
>>> hosted-engine.conf?
>>
>> AFAIK we don't have any rotation mechanism for that file; something
>> else you have in place on that host?
>
> Those machines are all CentOS 7.2 minimal installs. The only
> adaptation I do is installing vim, removing postfix and installing
> exim, removing firewalld and installing iptables-service. Then I add
> the oVirt repos (3.6 and 3.6-snapshot) and deploy the host.
>
> But checking lsof shows that 'ovirt-ha-agent --no-daemon' has access
> to the config file (and the one ending with ~):
>
> # lsof | grep 'hosted-engine.conf~'
> ovirt-ha- 193446   vdsm  351u  REG  253,0  1021  135070683 /etc/ovirt-hosted-engine/hosted-engine.conf~

This is not that much relevant if the file was renamed after
ovirt-ha-agent opened it.
Try this:

[root@c72he20160405h1 ovirt-hosted-engine-setup]# tail -n1 -f
/etc/ovirt-hosted-engine/hosted-engine.conf &
[1] 28866
[root@c72he20160405h1 ovirt-hosted-engine-setup]# port=

[root@c72he20160405h1 ovirt-hosted-engine-setup]# lsof | grep hosted-engine.conf
tail  28866  root    3r  REG  253,0  1014  1595898 /etc/ovirt-hosted-engine/hosted-engine.conf
[root@c72he20160405h1 ovirt-hosted-engine-setup]# mv
/etc/ovirt-hosted-engine/hosted-engine.conf
/etc/ovirt-hosted-engine/hosted-engine.conf_123
[root@c72he20160405h1 ovirt-hosted-engine-setup]# lsof | grep hosted-engine.conf
tail  28866  root    3r  REG  253,0  1014  1595898 /etc/ovirt-hosted-engine/hosted-engine.conf_123
[root@c72he20160405h1 ovirt-hosted-engine-setup]#

The issue is understanding who renames that file on your host.
As a rule of thumb, if a file name ends with a tilde (~), it usually
means that it is a backup created by a text editor or a similar program.


>>> [root@cube-two ~]# ls -l /etc/ovirt-hosted-engine
>>> total 16
>>> -rw-r--r--. 1 root root 3252 Apr  8 10:35 answers.conf
>>> -rw-r--r--. 1 root root 1021 Apr 13 09:31 hosted-engine.conf
>>> -rw-r--r--. 1 root root 1021 Apr 13 09:30 hosted-engine.conf~
>>>
>>> [root@cube-three ~]# ls -l /etc/ovirt-hosted-engine
>>> total 16
>>> -rw-r--r--. 1 root root 3233 Apr 11 08:02 answers.conf
>>> -rw-r--r--. 1 root root 1002 Apr 13 09:31 hosted-engine.conf
>>> -rw-r--r--. 1 root root 1002 Apr 13 09:31 hosted-engine.conf~
>>>
>>> On 12.04.16 16:01, Simone Tiraboschi wrote:
 Everything seems fine here,
 /etc/ovirt-hosted-engine/hosted-engine.conf seems to be correctly
 created with the right name.
 Can you please check the latest modification time of your
 /etc/ovirt-hosted-engine/hosted-engine.conf~ and compare it with the
 setup time?

 On Tue, Apr 12, 2016 at 2:34 PM, Richard Neuboeck  
 wrote:
> On 04/12/2016 11:32 AM, Simone Tiraboschi wrote:
>> On Mon, Apr 11, 2016 at 8:11 AM, Richard Neuboeck 
>>  wrote:
>>> Hi oVirt Group,
>>>
>>> in my attempts to get all aspects of oVirt 3.6 up and running I
>>> stumbled upon something I'm not sure how to fix:
>>>
>>> Initially I installed a hosted engine setup. After that I added
>>> another HA host (with hosted-engine --deploy). The host was
>>> registered in the Engine correctly and HA agent came up as expected.
>>>
>>> However if I reboot the second host (through the Engine UI or
>>> manually) HA agent fails to start. The reason seems to be that
>>> /etc/ovirt-hosted-engine/hosted-engine.conf is empty. The backup
>>> file ending with ~ exists though.
>>
>> Can you please attach hosted-engine-setup logs from your additional 
>> hosts?
>> AFAIK our code will never take a ~ ending backup of that file.
>
> ovirt-hosted-engine-setup logs from both additional hosts are
> attached to this mail.
>
>>
>>> Here are the log messages from the journal:
>>> Apr 11 07:29:39 cube-two.tbi.univie.ac.at systemd[1]: Starting oVirt
>>> Hosted Engine High Availability Monitoring Agent...
>>> Apr 11 07:29:39 cube-two.tbi.univie.ac.at ovirt-ha-agent[3747]:
>>> INFO:ovirt_hosted_engine_ha.agent.agent.Agent:ovirt-hosted-engine-ha
>>> agent 1.3.5.3-0.0.master started
>>> Apr 11 07:29:39 cube-two.tbi.univie.ac.at ovirt-ha-agent[3747]:
>>> INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Found
>>> certificate common name: cube-two.tbi.univie.ac.at
>>> Apr 11 07:29:39 cube-two.tbi.univie.ac.at ovirt-ha-agent[3747]:
>>> ovirt-ha-agent
>>> ovirt_hosted_engine_ha.agent.hosted_engine.Hoste

Re: [ovirt-users] RESTAPI and kerberos authentication

2016-04-14 Thread Marcel Galke
Hi,

I've managed to get it work.
What I've done is to first run "engine-manage-domains delete" to remove
the domain and add it again using the new aaa extension tool
"ovirt-engine-extension-aaa-ldap-setup". It's not a good idea to mix
these two methods, I guess.
Restart the engine after each change.
To get rid of the double authentication for the webadmin portal I
changed in /etc/httpd/conf.d/ovirt-sso.conf

""
to
""

So Kerberos SSO will be used for the API only.
Furthermore I've given the user the role "superuser".

Best regards
Marcel

On 14.04.2016 11:44, Marcel Galke wrote:
> Hi,
> 
> I'm using curl and I followed steps in [1] and double checked the
> permissions.
> I've tested API access vs. webadmin access (see below).
> 
> $ curl -v --negotiate -X GET -H "Accept: application/xml" -k
> https://server8.funfurt.de/ovirt-engine/webadmin/?locale=de_DE
> # Result: HTTP 401
> $ kinit
> $ curl -v --negotiate -X GET -H "Accept: application/xml" -k
> https://server8.funfurt.de/ovirt-engine/webadmin/?locale=de_DE # Result:
> HTTP 200
> $ curl --negotiate -v -u : -X GET -H "Accept: application/xml" -k
> https://server8.funfurt.de/api/vms # Result: HTTP 401
> 
> Therfore I believe httpd config is fine.
> For engine.log and and properties file see attachment.
> I've also attached console output from curl.
> 
> Thanks and regards
> Marcel
> 
> On 14.04.2016 08:11, Ondra Machacek wrote:
>> On 04/14/2016 08:06 AM, Ondra Machacek wrote:
>>> On 04/13/2016 10:43 PM, Marcel Galke wrote:
 Hello,

 I need to automatically create a list of all the VMs and the storage
 path to their disks in the data center for offline storage for desaster
 recovery. We have oVirt 3.6 and IPA 4.2.0.
 To achieve this my idea was to query the API using Kerberos
 authentication and a keytab. This could then run as cronjob.
 Using username and password is not an option.

 To configure oVirt for use with IPA I've run engine-manage-domains but
 the result is not exactly what I'm looking for (despite from the fact,
 that I can add direcotry users etc.).
 Next I tried the generic LDAP provider as per documentation
 https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.5/html/Administration_Guide/sect-Directory_Users.html


>>>
>>> Just to be sure did you followed these steps[1]?
>>> If yes and it don't work, it would be nice if you can share a properties
>>> files you have and engine.log(the part when engine starts). Please also
>>> ensure twice you have correct permissions on properties files, keytab
>>> and apache confiig.
>>>
>>> Also ensure your browser is correctly setup. Example for firefox[2].
>>
>> Sorry, I've just realized you use API.
>> So do you use SDKs or curl? Make sure you use kerberos properly in both
>> cases.
>> For cur its:  curl --negotiate
>> For SDKs[1], there is a parameter 'kerberos=true' in creation of api
>> object.
>>
>> [1]
>> http://www.ovirt.org/develop/release-management/features/infra/kerberos-support-in-sdks-and-cli/
>>
>>
>>>
>>> It don't work only for API or for UserPortal and Webadmin as well? Or
>>> you set it up only for API?
>>>
>>> [1]
>>> https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.5/html/Administration_Guide/sect-Directory_Users.html#sect-Single_Sign-On_to_the_Administration_and_User_Portal
>>>
>>>
>>> [2]
>>> https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/Deployment_Guide/sso-config-firefox.html
>>>
>>>
>>>

 It was quite easy to get Apache to authenticate against IPA, but I did
 not manage to access the API. Each try ended with an "HTTP/1.1 401
 Unauthorized".
 At the moment Apache authentication appears first and then the RESTAPI
 auth dialog comes up.
 Some facts about my setup:
 oVirt Host:
 -OS: CentOS 6.7
 -Engine Version: 3.6
 IPA Host:
 -OS: CentOS 7.2
 -IPA Version: 4.2.0


 I might mix some things up. Please help me to find out how to achieve my
 goal. I can provide more information if required.

 Thanks a lot!


 Best regards
 Marcel
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
> 
> 
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] HA agent fails to start

2016-04-14 Thread Richard Neuboeck
On 04/13/2016 10:00 AM, Simone Tiraboschi wrote:
> On Wed, Apr 13, 2016 at 9:38 AM, Richard Neuboeck  
> wrote:
>> The answers file shows the setup time of both machines.
>>
>> On both machines hosted-engine.conf got rotated right before I wrote
>> this mail. Is it possible that I managed to interrupt the rotation with
>> the reboot so the backup was accurate but the update not yet written to
>> hosted-engine.conf?
> 
> AFAIK we don't have any rotation mechanism for that file; something
> else you have in place on that host?

Those machines are all CentOS 7.2 minimal installs. The only
adaptation I do is installing vim, removing postfix and installing
exim, removing firewalld and installing iptables-service. Then I add
the oVirt repos (3.6 and 3.6-snapshot) and deploy the host.

But checking lsof shows that 'ovirt-ha-agent --no-daemon' has access
to the config file (and the one ending with ~):

# lsof | grep 'hosted-engine.conf~'
ovirt-ha- 193446   vdsm  351u  REG
253,01021135070683
/etc/ovirt-hosted-engine/hosted-engine.conf~
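One way to catch whatever rewrites that file across a reboot would be an audit
watch (just a sketch, assuming auditd is installed; the rule file name is
arbitrary):

# Persistent watch, loaded again at boot:
echo '-w /etc/ovirt-hosted-engine/hosted-engine.conf -p wa -k he-conf' \
    > /etc/audit/rules.d/he-conf.rules
service auditd restart
# After reproducing the problem, show which processes touched the file:
ausearch -k he-conf -i | tail -n 60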


>> [root@cube-two ~]# ls -l /etc/ovirt-hosted-engine
>> total 16
>> -rw-r--r--. 1 root root 3252 Apr  8 10:35 answers.conf
>> -rw-r--r--. 1 root root 1021 Apr 13 09:31 hosted-engine.conf
>> -rw-r--r--. 1 root root 1021 Apr 13 09:30 hosted-engine.conf~
>>
>> [root@cube-three ~]# ls -l /etc/ovirt-hosted-engine
>> total 16
>> -rw-r--r--. 1 root root 3233 Apr 11 08:02 answers.conf
>> -rw-r--r--. 1 root root 1002 Apr 13 09:31 hosted-engine.conf
>> -rw-r--r--. 1 root root 1002 Apr 13 09:31 hosted-engine.conf~
>>
>> On 12.04.16 16:01, Simone Tiraboschi wrote:
>>> Everything seems fine here,
>>> /etc/ovirt-hosted-engine/hosted-engine.conf seems to be correctly
>>> created with the right name.
>>> Can you please check the latest modification time of your
>>> /etc/ovirt-hosted-engine/hosted-engine.conf~ and compare it with the
>>> setup time?
>>>
>>> On Tue, Apr 12, 2016 at 2:34 PM, Richard Neuboeck  
>>> wrote:
 On 04/12/2016 11:32 AM, Simone Tiraboschi wrote:
> On Mon, Apr 11, 2016 at 8:11 AM, Richard Neuboeck  
> wrote:
>> Hi oVirt Group,
>>
>> in my attempts to get all aspects of oVirt 3.6 up and running I
>> stumbled upon something I'm not sure how to fix:
>>
>> Initially I installed a hosted engine setup. After that I added
>> another HA host (with hosted-engine --deploy). The host was
>> registered in the Engine correctly and HA agent came up as expected.
>>
>> However if I reboot the second host (through the Engine UI or
>> manually) HA agent fails to start. The reason seems to be that
>> /etc/ovirt-hosted-engine/hosted-engine.conf is empty. The backup
>> file ending with ~ exists though.
>
> Can you please attach hosted-engine-setup logs from your additional hosts?
> AFAIK our code will never take a ~ ending backup of that file.

 ovirt-hosted-engine-setup logs from both additional hosts are
 attached to this mail.

>
>> Here are the log messages from the journal:
>> Apr 11 07:29:39 cube-two.tbi.univie.ac.at systemd[1]: Starting oVirt
>> Hosted Engine High Availability Monitoring Agent...
>> Apr 11 07:29:39 cube-two.tbi.univie.ac.at ovirt-ha-agent[3747]:
>> INFO:ovirt_hosted_engine_ha.agent.agent.Agent:ovirt-hosted-engine-ha
>> agent 1.3.5.3-0.0.master started
>> Apr 11 07:29:39 cube-two.tbi.univie.ac.at ovirt-ha-agent[3747]:
>> INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Found
>> certificate common name: cube-two.tbi.univie.ac.at
>> Apr 11 07:29:39 cube-two.tbi.univie.ac.at ovirt-ha-agent[3747]:
>> ovirt-ha-agent
>> ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Hosted
>> Engine is not configured. Shutting down.
>> Apr 11 07:29:39 cube-two.tbi.univie.ac.at ovirt-ha-agent[3747]:
>> ERROR:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Hosted
>> Engine is not configured. Shutting down.
>> Apr 11 07:29:39 cube-two.tbi.univie.ac.at ovirt-ha-agent[3747]:
>> INFO:ovirt_hosted_engine_ha.agent.agent.Agent:Agent shutting down
>> Apr 11 07:29:39 cube-two.tbi.univie.ac.at systemd[1]:
>> ovirt-ha-agent.service: main process exited, code=exited, status=255/n/a
>>
>> If I restore the configuration from the backup file and manually
>> restart the HA agent it's working properly.
>>
>> For testing purposes I added a third HA host which turn out to
>> behave exactly the same.
>>
>> Any help would be appreciated!
>> Thanks
>> Cheers
>> Richard
>>
>> --
>> /dev/null
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>


 --
 /dev/null
>>
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.o

Re: [ovirt-users] Importing An Ubuntu Disk

2016-04-14 Thread Marcin Michta

  
  
Hi,

I have one method - maybe not so nice, but works:
Create new VM on a NFS storage with disk type and size of disk which
you want to import.
In GUI find ID of new created disk. Copy image to the storage and
replace image in the correct directory based on the ID of disk (ex.
/export/nfs/$STORAGE_ID/images/$DISK_ID/$DISK_ID_FILE )
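Roughly, with placeholder IDs (and converting first if the source is qcow2
while the new disk was created raw):

# Placeholders - take the real IDs from the GUI / storage domain directory tree
STORAGE_ID="<storage-domain-id>"
DISK_ID="<disk-id>"
VOL_FILE="<volume-file>"
qemu-img convert -p -O raw ubuntu.qcow2 ubuntu.raw   # skip if the formats already match
cp ubuntu.raw /export/nfs/$STORAGE_ID/images/$DISK_ID/$VOL_FILE
chown 36:36 /export/nfs/$STORAGE_ID/images/$DISK_ID/$VOL_FILE   # vdsm:kvm, like the rest of the domain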


On 14.04.2016 12:01, Charles Tassell wrote:

Hello,

  I was wondering if there is any way to import a disk image
(qcow/qcow2) into an oVirt storage domain?  I've tried v2v but it
won't work because the image customization parts of it won't deal
with Ubuntu, and I tried import-to-ovirt.pl but the disks it creates
seem to be broken in some way that prevents them from booting when
attached to a VM.

  I've seen some references to creating a pre-allocated disk of the
same size and then using dd to overwrite the contents of it, but is
there a better method?  Or should I just import my existing VMs by
booting off a system rescue CD and restoring backups over the network?

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


-- 
Marcin Michta
Systems & Network Administrator
E: marcin.mic...@codilime.com
CodiLime Sp. z o.o.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] RESTAPI and kerberos authentication

2016-04-14 Thread Ondra Machacek
The issue is most probably that your user doesn't have permissions to
log in/see vms in oVirt.
Just log in as admin@internal to webadmin and assign the user 'aaa' some
permissions.

Here [1] is an example of how to work with virtual machine permissions.

[1] 
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.5/html/Administration_Guide/sect-Virtual_Machines_and_Permissions.html


On 04/14/2016 11:44 AM, Marcel Galke wrote:

Hi,

I'm using curl and I followed steps in [1] and double checked the
permissions.
I've tested API access vs. webadmin access (see below).

$ curl -v --negotiate -X GET -H "Accept: application/xml" -k
https://server8.funfurt.de/ovirt-engine/webadmin/?locale=de_DE
# Result: HTTP 401
$ kinit
$ curl -v --negotiate -X GET -H "Accept: application/xml" -k
https://server8.funfurt.de/ovirt-engine/webadmin/?locale=de_DE # Result:
HTTP 200
$ curl --negotiate -v -u : -X GET -H "Accept: application/xml" -k
https://server8.funfurt.de/api/vms # Result: HTTP 401

Therefore I believe the httpd config is fine.
For engine.log and the properties file see the attachment.
I've also attached console output from curl.

Thanks and regards
Marcel

On 14.04.2016 08:11, Ondra Machacek wrote:

On 04/14/2016 08:06 AM, Ondra Machacek wrote:

On 04/13/2016 10:43 PM, Marcel Galke wrote:

Hello,

I need to automatically create a list of all the VMs and the storage
path to their disks in the data center for offline storage for disaster
recovery. We have oVirt 3.6 and IPA 4.2.0.
To achieve this my idea was to query the API using Kerberos
authentication and a keytab. This could then run as cronjob.
Using username and password is not an option.

To configure oVirt for use with IPA I've run engine-manage-domains but
the result is not exactly what I'm looking for (apart from the fact
that I can add directory users etc.).
Next I tried the generic LDAP provider as per documentation
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.5/html/Administration_Guide/sect-Directory_Users.html




Just to be sure, did you follow these steps [1]?
If yes and it doesn't work, it would be nice if you could share the properties
files you have and engine.log (the part where the engine starts). Please also
double-check that you have correct permissions on the properties files, keytab
and apache config.

Also ensure your browser is set up correctly. Example for firefox [2].


Sorry, I've just realized you use the API.
So do you use the SDKs or curl? Make sure you use kerberos properly in both
cases.
For curl it's:  curl --negotiate
For the SDKs [1], there is a parameter 'kerberos=true' in the creation of the
api object.

[1]
http://www.ovirt.org/develop/release-management/features/infra/kerberos-support-in-sdks-and-cli/




Does it not work only for the API, or for the UserPortal and Webadmin as well?
Or did you set it up only for the API?

[1]
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.5/html/Administration_Guide/sect-Directory_Users.html#sect-Single_Sign-On_to_the_Administration_and_User_Portal


[2]
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/Deployment_Guide/sso-config-firefox.html





It was quite easy to get Apache to authenticate against IPA, but I did
not manage to access the API. Each try ended with an "HTTP/1.1 401
Unauthorized".
At the moment Apache authentication appears first and then the RESTAPI
auth dialog comes up.
Some facts about my setup:
oVirt Host:
-OS: CentOS 6.7
-Engine Version: 3.6
IPA Host:
-OS: CentOS 7.2
-IPA Version: 4.2.0


I might mix some things up. Please help me to find out how to achieve my
goal. I can provide more information if required.

Thanks a lot!


Best regards
Marcel
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM has been paused due to no Storage space error.

2016-04-14 Thread Nir Soffer
On Thu, Apr 14, 2016 at 1:23 PM,   wrote:
> Hi Nir,
>
> El 2016-04-14 11:02, Nir Soffer escribió:
>>
>> On Thu, Apr 14, 2016 at 12:38 PM, Fred Rolland 
>> wrote:
>>>
>>> Nir,
>>> See attached the repoplot output.
>>
>>
>> So we have about one concurrent lvm command without any disk operations,
>> and
>> everything seems snappy.
>>
>> Nicolás, maybe this storage or the host is overloaded by the vms? Are your
>> vms
>> doing lot of io?
>>
>
> Not that I know, actually it should have been a "calm" time slot as far as
> IOs go, nor the storage was overloaded at that time. If I'm not mistaken, on
> the repoplot report I see there are two LVM operations at a time, maybe that
> has something to do with it?

The operation that took about 50 seconds started at the same time as another
operation, but that does not explain why several other lvm commands took
about 15 seconds each.

> (although as you say, the lvextend is just a
> metadata change...)
>
>
>> lvextend operation should be very fast operation, this is just a
>> metadata change,
>> allocating couple of extents to that lv.
>>
>> Zdenek, how do you suggest to debug slow lvm commands?
>>
>> See the attached pdf, lvm commands took 15-50 seconds.
>>
>>>
>>> On Thu, Apr 14, 2016 at 12:18 PM, Nir Soffer  wrote:


 On Thu, Apr 14, 2016 at 12:02 PM, Fred Rolland 
 wrote:
 > From the log, we can see that the lvextend command took 18 sec, which
 > is
 > quite long.

 Fred, can you run repoplot on this log file? it will may explain why
 this
 lvm
 call took 18 seconds.

 Nir

 >
 > 60decf0c-6d9a-4c3b-bee6-de9d2ff05e85::DEBUG::2016-04-13
 > 10:52:06,759::lvm::290::Storage.Misc.excCmd::(cmd) /usr/bin/taskset
 > --cpu-list 0-23 /usr/bin/sudo -n /usr/sbin/lvm lvextend --config '
 > devices {
 > preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1
 > write_cache_state=0 disable_after_error_count=3 filter = [
 > '\''a|/dev/mapper/36000eb3a4f1acbc20043|'\'',
 > '\''r|.*|'\''
 > ] }
 > global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1
 > use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } '
 > --autobackup
 > n --size 6016m
 >
 >
 > 5de4a000-a9c4-489c-8eee-10368647c413/721d09bc-60e7-4310-9ba2-522d2a4b03d0
 > (cwd None)
 > 
 > 60decf0c-6d9a-4c3b-bee6-de9d2ff05e85::DEBUG::2016-04-13
 > 10:52:22,217::lvm::290::Storage.Misc.excCmd::(cmd) SUCCESS:  = '
 > WARNING: lvmetad is running but disabled. Restart lvmetad before
 > enabling
 > it!\n  WARNING: This metadata update is NOT backed up\n';  = 0
 >
 >
 > The watermark can be configured by the following value:
 >
 > 'volume_utilization_percent', '50',
 > 'Together with volume_utilization_chunk_mb, set the minimal free '
 > 'space before a thin provisioned block volume is extended. Use '
 > 'lower values to extend earlier.')
 >
 > On Thu, Apr 14, 2016 at 11:42 AM, Michal Skrivanek
 >  wrote:
 >>
 >>
 >> > On 14 Apr 2016, at 09:57, nico...@devels.es wrote:
 >> >
 >> > Ok, that makes sense, thanks for the insight both Alex and Fred.
 >> > I'm
 >> > attaching the VDSM log of the SPM node at the time of the pause. I
 >> > couldn't
 >> > find anything that would clearly identify the problem, but maybe
 >> > you'll be
 >> > able to.
 >>
 >> In extreme conditions it will happen. When your storage is slow to
 >> respond
 >> to extension request, and when your write rate is very high then it
 >> may
 >> happen, as it is happening to you, that you run out space sooner than
 >> the
 >> extension finishes. You can change the watermark value I guess(right,
 >> Fred?), but better would be to plan a bit more ahead and either use
 >> preallocated or create thin and then allocate expected size in
 >> advance
 >> before the operation causing it (typically it only happens during
 >> untarring
 >> gigabytes of data, or huge database dump/restore)
 >> Even then, the VM should always be automatially resumed once the disk
 >> space is allocated
 >>
 >> Thanks,
 >> michal
 >>
 >> >
 >> > Thanks.
 >> >
 >> > Regards.
 >> >
 >> > El 2016-04-13 13:09, Fred Rolland escribió:
 >> >> Hi,
 >> >> Yes, just as Alex explained, if the disk has been created as thin
 >> >> provisioning, the vdsm will extends once a watermark is reached.
 >> >> Usually it should not get to the state the Vm is paused.
 >> >> From the log, you can see that the request for extension has been
 >> >> sent
 >> >> before the VM got to the No Space Error.
 >> >> Later, we can see the VM resuming.
 >> >> INFO::2016-04-13
 >> >> 10:52:04,182::vm::1026::virt.vm::(extendDrivesIfNeeded)
 >> >> vmId=`f9cd282e-110a-4896-98d3-6d320662744d`::Requ

Re: [ovirt-users] VM has been paused due to no Storage space error.

2016-04-14 Thread nicolas

Hi Nir,

El 2016-04-14 11:02, Nir Soffer escribió:
On Thu, Apr 14, 2016 at 12:38 PM, Fred Rolland  
wrote:

Nir,
See attached the repoplot output.


So we have about one concurrent lvm command without any disk operations, and
everything seems snappy.

Nicolás, maybe this storage or the host is overloaded by the vms? Are your vms
doing lot of io?



Not that I know of; it should actually have been a "calm" time slot as far
as IO goes, and the storage was not overloaded at that time either. If I'm not
mistaken, on the repoplot report I see there are two LVM operations at a
time; maybe that has something to do with it? (although, as you say, the
lvextend is just a metadata change...)



lvextend operation should be very fast operation, this is just a
metadata change,
allocating couple of extents to that lv.

Zdenek, how do you suggest to debug slow lvm commands?

See the attached pdf, lvm commands took 15-50 seconds.



On Thu, Apr 14, 2016 at 12:18 PM, Nir Soffer  
wrote:


On Thu, Apr 14, 2016 at 12:02 PM, Fred Rolland 
wrote:
> From the log, we can see that the lvextend command took 18 sec, which is
> quite long.

Fred, can you run repoplot on this log file? It may explain why this lvm
call took 18 seconds.

Nir

>
> 60decf0c-6d9a-4c3b-bee6-de9d2ff05e85::DEBUG::2016-04-13
> 10:52:06,759::lvm::290::Storage.Misc.excCmd::(cmd) /usr/bin/taskset
> --cpu-list 0-23 /usr/bin/sudo -n /usr/sbin/lvm lvextend --config '
> devices {
> preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1
> write_cache_state=0 disable_after_error_count=3 filter = [
> '\''a|/dev/mapper/36000eb3a4f1acbc20043|'\'', '\''r|.*|'\''
> ] }
> global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1
> use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } '
> --autobackup
> n --size 6016m
>
> 5de4a000-a9c4-489c-8eee-10368647c413/721d09bc-60e7-4310-9ba2-522d2a4b03d0
> (cwd None)
> 
> 60decf0c-6d9a-4c3b-bee6-de9d2ff05e85::DEBUG::2016-04-13
> 10:52:22,217::lvm::290::Storage.Misc.excCmd::(cmd) SUCCESS:  = '
> WARNING: lvmetad is running but disabled. Restart lvmetad before
> enabling
> it!\n  WARNING: This metadata update is NOT backed up\n';  = 0
>
>
> The watermark can be configured by the following value:
>
> 'volume_utilization_percent', '50',
> 'Together with volume_utilization_chunk_mb, set the minimal free '
> 'space before a thin provisioned block volume is extended. Use '
> 'lower values to extend earlier.')
>
> On Thu, Apr 14, 2016 at 11:42 AM, Michal Skrivanek
>  wrote:
>>
>>
>> > On 14 Apr 2016, at 09:57, nico...@devels.es wrote:
>> >
>> > Ok, that makes sense, thanks for the insight both Alex and Fred. I'm
>> > attaching the VDSM log of the SPM node at the time of the pause. I
>> > couldn't
>> > find anything that would clearly identify the problem, but maybe
>> > you'll be
>> > able to.
>>
>> In extreme conditions it will happen. When your storage is slow to
>> respond
>> to extension request, and when your write rate is very high then it may
>> happen, as it is happening to you, that you run out space sooner than
>> the
>> extension finishes. You can change the watermark value I guess(right,
>> Fred?), but better would be to plan a bit more ahead and either use
>> preallocated or create thin and then allocate expected size in advance
>> before the operation causing it (typically it only happens during
>> untarring
>> gigabytes of data, or huge database dump/restore)
>> Even then, the VM should always be automatially resumed once the disk
>> space is allocated
>>
>> Thanks,
>> michal
>>
>> >
>> > Thanks.
>> >
>> > Regards.
>> >
>> > El 2016-04-13 13:09, Fred Rolland escribió:
>> >> Hi,
>> >> Yes, just as Alex explained, if the disk has been created as thin
>> >> provisioning, the vdsm will extends once a watermark is reached.
>> >> Usually it should not get to the state the Vm is paused.
>> >> From the log, you can see that the request for extension has been
>> >> sent
>> >> before the VM got to the No Space Error.
>> >> Later, we can see the VM resuming.
>> >> INFO::2016-04-13
>> >> 10:52:04,182::vm::1026::virt.vm::(extendDrivesIfNeeded)
>> >> vmId=`f9cd282e-110a-4896-98d3-6d320662744d`::Requesting extension
>> >> for
>> >> volume
>> >> 
>> >> INFO::2016-04-13 10:52:29,360::vm::3728::virt.vm::(onIOError)
>> >> vmId=`f9cd282e-110a-4896-98d3-6d320662744d`::abnormal vm stop device
>> >> virtio-disk0 error enospc
>> >> 
>> >> INFO::2016-04-13
>> >> 10:52:54,317::vm::5084::virt.vm::(_logGuestCpuStatus)
>> >> vmId=`f9cd282e-110a-4896-98d3-6d320662744d`::CPU running: onResume
>> >> Note that the extension is done on the SPM host, so it would be
>> >> interesting to see the vdsm log from the host that was in SPM role
>> >> at
>> >> this timeframe.
>> >> Regards,
>> >> Fred
>> >> On Wed, Apr 13, 2016 at 2:43 PM, Alex Crow 
>> >> wrote:
>> >>> Hi,
>> >>> If you have set up VM disks as Thin Provisioned, the VM has to
>> >>> pause when the disk image needs to expand. You won't see thi

Re: [ovirt-users] Educational use case question

2016-04-14 Thread Yedidyah Bar David
On Thu, Apr 14, 2016 at 11:27 AM, Gianluca Cecchi
 wrote:
> On Thu, Apr 14, 2016 at 8:15 AM, Yedidyah Bar David  wrote:
>>
>> On Thu, Apr 14, 2016 at 5:18 AM, Michael Hall  wrote:
>>
>>
>> 3. NFS
>> loop-back mounting nfs is considered risky, due to potential locking
>> issues. Therefore, if you want to use NFS, you are better off doing
>> something like this:
>>
>
> Hello,
> can you give more details about these potential locking issues? So that I
> can reproduce

Most of what I know about this is:

https://lwn.net/Articles/595652/

> I have 2 little environments where I'm using this kind of setup. In one of
> them the hypervisor is a physical server, in the other one the hypervisor is
> itself a libvirt VM inside a Fedora 23 based laptop. oVirt version is 3.6.4
> on both.
>
> The test VM has 2 disks sda and sdb; all ovirt related stuff on sdb
>
> My raw steps for the lab have been, after setting up CentOS 7.2 OS,
> disabling ipv6 and NetworkManager, putting SELinux to permissive and
> enabling ovirt repo:

selinux enforcing should work too, if it fails please open a bug. Thanks.

You might have to set the right contexts for your local disks.

>
> NOTE: I also stop and disable firewalld
>
> My host is ovc72.localdomain.local and name of my future engine
> shengine.localdomain.local
>
> yum -y update
>
> yum install ovirt-hosted-engine-setup ovirt-engine-appliance
>
> yum install rpcbind nfs-utils nfs-server
> (some of them probably already pulled in as dependencies from previous
> command)
>
> When I start from scratch the system
>
> pvcreate /dev/sdb
> vgcreate OVIRT_DOMAIN /dev/sdb
> lvcreate -n ISO_DOMAIN -L 5G OVIRT_DOMAIN
> lvcreate -n SHE_DOMAIN -L 25G OVIRT_DOMAIN
> lvcreate -n NFS_DOMAIN -l +100%FREE OVIRT_DOMAIN
>
> if I only have to reinitialize I start from here
> mkfs -t xfs -f /dev/mapper/OVIRT_DOMAIN-ISO_DOMAIN
> mkfs -t xfs -f /dev/mapper/OVIRT_DOMAIN-NFS_DOMAIN
> mkfs -t xfs -f /dev/mapper/OVIRT_DOMAIN-SHE_DOMAIN
>
> mkdir /ISO_DOMAIN /NFS_DOMAIN /SHE_DOMAIN
>
> /etc/fstab
> /dev/mapper/OVIRT_DOMAIN-ISO_DOMAIN /ISO_DOMAIN xfs defaults0 0
> /dev/mapper/OVIRT_DOMAIN-NFS_DOMAIN /NFS_DOMAIN xfs defaults0 0
> /dev/mapper/OVIRT_DOMAIN-SHE_DOMAIN /SHE_DOMAIN xfs defaults0 0
>
> mount /ISO_DOMAIN/--> this for ISO images
> mount /NFS_DOMAIN/   ---> this for data storage domain where your VMs will
> live (NFS based)
> mount /SHE_DOMAIN/   --> this is for the HE VM
>
> chown 36:36 /ISO_DOMAIN
> chown 36:36 /NFS_DOMAIN
> chown 36:36 /SHE_DOMAIN
>
> chmod 0755 /ISO_DOMAIN
> chmod 0755 /NFS_DOMAIN
> chmod 0755 /SHE_DOMAIN
>
> /etc/exports
> /ISO_DOMAIN   *(rw,anonuid=36,anongid=36,all_squash)
> /NFS_DOMAIN   *(rw,anonuid=36,anongid=36,all_squash)
> /SHE_DOMAIN   *(rw,anonuid=36,anongid=36,all_squash)
>
> systemctl enable rpcbind
> systemctl start rpcbind
>
> systemctl enable nfs-server
> systemctl start nfs-server
>
> hosted-engine --deploy
>
> During setup I choose:
>
>   Engine FQDN: shengine.localdomain.local
>
>   Firewall manager   : iptables
>
>   Storage connection :
> ovc71.localdomain.local:/SHE_DOMAIN
>
>   OVF archive (for disk boot):
> /usr/share/ovirt-engine-appliance/ovirt-engine-appliance-20151015.0-1.el7.centos.ova
>
> Also, I used the appliance provided by ovirt-engine-appliance package
>
> After install you have to make a dependency so that VDSM Broker starts after
> NFS Server
>
> In /usr/lib/systemd/system/ovirt-ha-broker.service
>
> Added in section  [Unit] the line:
>
> After=nfs-server.service
>
> Also in file vdsmd.service changed from:
> After=multipathd.service libvirtd.service iscsid.service rpcbind.service \
>   supervdsmd.service sanlock.service vdsm-network.service
>
> to:
> After=multipathd.service libvirtd.service iscsid.service rpcbind.service \
>   supervdsmd.service sanlock.service vdsm-network.service \
>   nfs-server.service
>
> NOTE: the files will be overwritten by future updates, so you have to keep
> in mind...
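A systemd drop-in avoids editing the unit files shipped by the packages and
should survive updates (untested here, standard systemd behaviour assumed):

mkdir -p /etc/systemd/system/ovirt-ha-broker.service.d
cat > /etc/systemd/system/ovirt-ha-broker.service.d/nfs-order.conf <<'EOF'
[Unit]
After=nfs-server.service
EOF
# repeat the same drop-in for vdsmd.service, then:
systemctl daemon-reload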
>
> On ovc72 in /etc/multipath.conf aright after line
> # VDSM REVISION 1.3
>
> added
> # RHEV PRIVATE
>
> blacklist {
> wwid 0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-1
> wwid 0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-0
> }
>
> To exclude both 2 internal drives... probably oVirt keeps in mind only the
> first one?

No idea

> Otherwise many messages like:
> Jan 25 11:02:00 ovc72 kernel: device-mapper: table: 253:6: multipath: error
> getting device
> Jan 25 11:02:00 ovc72 kernel: device-mapper: ioctl: error adding target to
> table
>
> So far I didn't find any problems. Only a little trick when you have to make
> ful lmaintenance where you have to power off the (only) hypervisor, where
> you have to make the right order steps.

I guess you can probably script that too...

Thanks for sharing. As I wrote above, I have no personal experience with loopback nfs.
For the multipath question, if interested, perhaps ask

Re: [ovirt-users] VM has been paused due to no Storage space error.

2016-04-14 Thread Nir Soffer
On Thu, Apr 14, 2016 at 12:38 PM, Fred Rolland  wrote:
> Nir,
> See attached the repoplot output.

So we have about one concurrent lvm command without any disk operations, and
everything seems snappy.

Nicolás, maybe this storage or the host is overloaded by the vms? Are your vms
doing a lot of io?

The lvextend operation should be a very fast operation; it is just a
metadata change, allocating a couple of extents to that lv.

Zdenek, how do you suggest debugging slow lvm commands?

See the attached pdf; lvm commands took 15-50 seconds.
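A quick way to separate vdsm from the storage stack is to time a read-only lvm
query on the SPM host, using a config similar to what vdsm passes (a sketch
based on the command line quoted below):

time /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]
    ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 }
    global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } '
# If this is also slow, the delay is below vdsm (multipath/SAN), not in vdsm itself.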

>
> On Thu, Apr 14, 2016 at 12:18 PM, Nir Soffer  wrote:
>>
>> On Thu, Apr 14, 2016 at 12:02 PM, Fred Rolland 
>> wrote:
>> > From the log, we can see that the lvextend command took 18 sec, which is
>> > quite long.
>>
>> Fred, can you run repoplot on this log file? it will may explain why this
>> lvm
>> call took 18 seconds.
>>
>> Nir
>>
>> >
>> > 60decf0c-6d9a-4c3b-bee6-de9d2ff05e85::DEBUG::2016-04-13
>> > 10:52:06,759::lvm::290::Storage.Misc.excCmd::(cmd) /usr/bin/taskset
>> > --cpu-list 0-23 /usr/bin/sudo -n /usr/sbin/lvm lvextend --config '
>> > devices {
>> > preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1
>> > write_cache_state=0 disable_after_error_count=3 filter = [
>> > '\''a|/dev/mapper/36000eb3a4f1acbc20043|'\'', '\''r|.*|'\''
>> > ] }
>> > global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1
>> > use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } '
>> > --autobackup
>> > n --size 6016m
>> >
>> > 5de4a000-a9c4-489c-8eee-10368647c413/721d09bc-60e7-4310-9ba2-522d2a4b03d0
>> > (cwd None)
>> > 
>> > 60decf0c-6d9a-4c3b-bee6-de9d2ff05e85::DEBUG::2016-04-13
>> > 10:52:22,217::lvm::290::Storage.Misc.excCmd::(cmd) SUCCESS:  = '
>> > WARNING: lvmetad is running but disabled. Restart lvmetad before
>> > enabling
>> > it!\n  WARNING: This metadata update is NOT backed up\n';  = 0
>> >
>> >
>> > The watermark can be configured by the following value:
>> >
>> > 'volume_utilization_percent', '50',
>> > 'Together with volume_utilization_chunk_mb, set the minimal free '
>> > 'space before a thin provisioned block volume is extended. Use '
>> > 'lower values to extend earlier.')
>> >
>> > On Thu, Apr 14, 2016 at 11:42 AM, Michal Skrivanek
>> >  wrote:
>> >>
>> >>
>> >> > On 14 Apr 2016, at 09:57, nico...@devels.es wrote:
>> >> >
>> >> > Ok, that makes sense, thanks for the insight both Alex and Fred. I'm
>> >> > attaching the VDSM log of the SPM node at the time of the pause. I
>> >> > couldn't
>> >> > find anything that would clearly identify the problem, but maybe
>> >> > you'll be
>> >> > able to.
>> >>
>> >> In extreme conditions it will happen. When your storage is slow to
>> >> respond
>> >> to extension request, and when your write rate is very high then it may
>> >> happen, as it is happening to you, that you run out space sooner than
>> >> the
>> >> extension finishes. You can change the watermark value I guess(right,
>> >> Fred?), but better would be to plan a bit more ahead and either use
>> >> preallocated or create thin and then allocate expected size in advance
>> >> before the operation causing it (typically it only happens during
>> >> untarring
>> >> gigabytes of data, or huge database dump/restore)
>> >> Even then, the VM should always be automatially resumed once the disk
>> >> space is allocated
>> >>
>> >> Thanks,
>> >> michal
>> >>
>> >> >
>> >> > Thanks.
>> >> >
>> >> > Regards.
>> >> >
>> >> > El 2016-04-13 13:09, Fred Rolland escribió:
>> >> >> Hi,
>> >> >> Yes, just as Alex explained, if the disk has been created as thin
>> >> >> provisioning, the vdsm will extends once a watermark is reached.
>> >> >> Usually it should not get to the state the Vm is paused.
>> >> >> From the log, you can see that the request for extension has been
>> >> >> sent
>> >> >> before the VM got to the No Space Error.
>> >> >> Later, we can see the VM resuming.
>> >> >> INFO::2016-04-13
>> >> >> 10:52:04,182::vm::1026::virt.vm::(extendDrivesIfNeeded)
>> >> >> vmId=`f9cd282e-110a-4896-98d3-6d320662744d`::Requesting extension
>> >> >> for
>> >> >> volume
>> >> >> 
>> >> >> INFO::2016-04-13 10:52:29,360::vm::3728::virt.vm::(onIOError)
>> >> >> vmId=`f9cd282e-110a-4896-98d3-6d320662744d`::abnormal vm stop device
>> >> >> virtio-disk0 error enospc
>> >> >> 
>> >> >> INFO::2016-04-13
>> >> >> 10:52:54,317::vm::5084::virt.vm::(_logGuestCpuStatus)
>> >> >> vmId=`f9cd282e-110a-4896-98d3-6d320662744d`::CPU running: onResume
>> >> >> Note that the extension is done on the SPM host, so it would be
>> >> >> interesting to see the vdsm log from the host that was in SPM role
>> >> >> at
>> >> >> this timeframe.
>> >> >> Regards,
>> >> >> Fred
>> >> >> On Wed, Apr 13, 2016 at 2:43 PM, Alex Crow 
>> >> >> wrote:
>> >> >>> Hi,
>> >> >>> If you have set up VM disks as Thin Provisioned, the VM has to
>> >> >>> pause when the disk image needs to expand. You won't see this on
>> >> >>> VMs
>> >> >>> with preallocated storage.
>> >

[ovirt-users] Importing An Ubuntu Disk

2016-04-14 Thread Charles Tassell

Hello,

  I was wondering if there is any way to import a disk image 
(qcow/qcow2) into an oVirt storage domain?  I've tried v2v but it won't 
work because the image customization parts of it won't deal with Ubuntu, 
and I tried import-to-ovirt.pl but the disks it creates seem to be 
broken in some way that prevents them from booting when attached to a VM.


  I've seen some references to creating a pre-allocated disk of the 
same size and then using dd to overwrite the contents of it, but is 
there a better method?  Or should I just import my existing VMs by 
booting off a system rescue CD and restoring backups over the network?



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] RESTAPI and kerberos authentication

2016-04-14 Thread Marcel Galke
Hi,

I'm using curl and I followed steps in [1] and double checked the
permissions.
I've tested API access vs. webadmin access (see below).

$ curl -v --negotiate -X GET -H "Accept: application/xml" -k
https://server8.funfurt.de/ovirt-engine/webadmin/?locale=de_DE
# Result: HTTP 401
$ kinit
$ curl -v --negotiate -X GET -H "Accept: application/xml" -k
https://server8.funfurt.de/ovirt-engine/webadmin/?locale=de_DE # Result:
HTTP 200
$ curl --negotiate -v -u : -X GET -H "Accept: application/xml" -k
https://server8.funfurt.de/api/vms # Result: HTTP 401

Therefore I believe the httpd config is fine.
For engine.log and the properties file see the attachment.
I've also attached console output from curl.

Thanks and regards
Marcel

On 14.04.2016 08:11, Ondra Machacek wrote:
> On 04/14/2016 08:06 AM, Ondra Machacek wrote:
>> On 04/13/2016 10:43 PM, Marcel Galke wrote:
>>> Hello,
>>>
>>> I need to automatically create a list of all the VMs and the storage
>>> path to their disks in the data center for offline storage for desaster
>>> recovery. We have oVirt 3.6 and IPA 4.2.0.
>>> To achieve this my idea was to query the API using Kerberos
>>> authentication and a keytab. This could then run as cronjob.
>>> Using username and password is not an option.
>>>
>>> To configure oVirt for use with IPA I've run engine-manage-domains but
>>> the result is not exactly what I'm looking for (despite from the fact,
>>> that I can add direcotry users etc.).
>>> Next I tried the generic LDAP provider as per documentation
>>> https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.5/html/Administration_Guide/sect-Directory_Users.html
>>>
>>>
>>
>> Just to be sure did you followed these steps[1]?
>> If yes and it don't work, it would be nice if you can share a properties
>> files you have and engine.log(the part when engine starts). Please also
>> ensure twice you have correct permissions on properties files, keytab
>> and apache confiig.
>>
>> Also ensure your browser is correctly setup. Example for firefox[2].
> 
> Sorry, I've just realized you use API.
> So do you use SDKs or curl? Make sure you use kerberos properly in both
> cases.
> For cur its:  curl --negotiate
> For SDKs[1], there is a parameter 'kerberos=true' in creation of api
> object.
> 
> [1]
> http://www.ovirt.org/develop/release-management/features/infra/kerberos-support-in-sdks-and-cli/
> 
> 
>>
>> It don't work only for API or for UserPortal and Webadmin as well? Or
>> you set it up only for API?
>>
>> [1]
>> https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.5/html/Administration_Guide/sect-Directory_Users.html#sect-Single_Sign-On_to_the_Administration_and_User_Portal
>>
>>
>> [2]
>> https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/Deployment_Guide/sso-config-firefox.html
>>
>>
>>
>>>
>>> It was quite easy to get Apache to authenticate against IPA, but I did
>>> not manage to access the API. Each try ended with an "HTTP/1.1 401
>>> Unauthorized".
>>> At the moment Apache authentication appears first and then the RESTAPI
>>> auth dialog comes up.
>>> Some facts about my setup:
>>> oVirt Host:
>>> -OS: CentOS 6.7
>>> -Engine Version: 3.6
>>> IPA Host:
>>> -OS: CentOS 7.2
>>> -IPA Version: 4.2.0
>>>
>>>
>>> I might mix some things up. Please help me to find out how to achieve my
>>> goal. I can provide more information if required.
>>>
>>> Thanks a lot!
>>>
>>>
>>> Best regards
>>> Marcel
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users

2016-04-14 11:29:05,113 INFO  [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] Connecting to /192.168.100.106
2016-04-14 11:29:08,114 INFO  [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] Connecting to /192.168.100.106
2016-04-14 11:29:08,130 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.ListVDSCommand] (DefaultQuartzScheduler_Worker-91) [] Command 'ListVDSCommand(HostName = server6, VdsIdAndVdsVDSCommandParametersBase:{runAsync='true', hostId='35241a8e-495f-4225-9cbd-07ebc216a8f4', vds='Host[server6,35241a8e-495f-4225-9cbd-07ebc216a8f4]'})' execution failed: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connection failed
2016-04-14 11:29:08,130 INFO  [org.ovirt.engine.core.vdsbroker.PollVmStatsRefresher] (DefaultQuartzScheduler_Worker-91) [] Failed to fetch vms info for host 'server6' - skipping VMs monitoring.
2016-04-14 11:29:10,627 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-15) [] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: User a...@funfurt.de@profile1-http failed to log in.
2016-04-14 11:29:10,627 WARN  [org.ovirt.engine.core.bll.aaa.LoginAdminUserCommand] (default task-

Re: [ovirt-users] VM has been paused due to no Storage space error.

2016-04-14 Thread Fred Rolland
Nir,
See attached the repoplot output.

On Thu, Apr 14, 2016 at 12:18 PM, Nir Soffer  wrote:

> On Thu, Apr 14, 2016 at 12:02 PM, Fred Rolland 
> wrote:
> > From the log, we can see that the lvextend command took 18 sec, which is
> > quite long.
>
> Fred, can you run repoplot on this log file? it will may explain why this
> lvm
> call took 18 seconds.
>
> Nir
>
> >
> > 60decf0c-6d9a-4c3b-bee6-de9d2ff05e85::DEBUG::2016-04-13
> > 10:52:06,759::lvm::290::Storage.Misc.excCmd::(cmd) /usr/bin/taskset
> > --cpu-list 0-23 /usr/bin/sudo -n /usr/sbin/lvm lvextend --config '
> devices {
> > preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1
> > write_cache_state=0 disable_after_error_count=3 filter = [
> > '\''a|/dev/mapper/36000eb3a4f1acbc20043|'\'', '\''r|.*|'\''
> ] }
> > global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1
> > use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } '
> --autobackup
> > n --size 6016m
> > 5de4a000-a9c4-489c-8eee-10368647c413/721d09bc-60e7-4310-9ba2-522d2a4b03d0
> > (cwd None)
> > 
> > 60decf0c-6d9a-4c3b-bee6-de9d2ff05e85::DEBUG::2016-04-13
> > 10:52:22,217::lvm::290::Storage.Misc.excCmd::(cmd) SUCCESS:  = '
> > WARNING: lvmetad is running but disabled. Restart lvmetad before enabling
> > it!\n  WARNING: This metadata update is NOT backed up\n';  = 0
> >
> >
> > The watermark can be configured by the following value:
> >
> > 'volume_utilization_percent', '50',
> > 'Together with volume_utilization_chunk_mb, set the minimal free '
> > 'space before a thin provisioned block volume is extended. Use '
> > 'lower values to extend earlier.')
> >
> > On Thu, Apr 14, 2016 at 11:42 AM, Michal Skrivanek
> >  wrote:
> >>
> >>
> >> > On 14 Apr 2016, at 09:57, nico...@devels.es wrote:
> >> >
> >> > Ok, that makes sense, thanks for the insight both Alex and Fred. I'm
> >> > attaching the VDSM log of the SPM node at the time of the pause. I
> couldn't
> >> > find anything that would clearly identify the problem, but maybe
> you'll be
> >> > able to.
> >>
> >> In extreme conditions it will happen. When your storage is slow to
> respond
> >> to extension request, and when your write rate is very high then it may
> >> happen, as it is happening to you, that you run out space sooner than
> the
> >> extension finishes. You can change the watermark value I guess(right,
> >> Fred?), but better would be to plan a bit more ahead and either use
> >> preallocated or create thin and then allocate expected size in advance
> >> before the operation causing it (typically it only happens during
> untarring
> >> gigabytes of data, or huge database dump/restore)
> >> Even then, the VM should always be automatially resumed once the disk
> >> space is allocated
> >>
> >> Thanks,
> >> michal
> >>
> >> >
> >> > Thanks.
> >> >
> >> > Regards.
> >> >
> >> > El 2016-04-13 13:09, Fred Rolland escribió:
> >> >> Hi,
> >> >> Yes, just as Alex explained, if the disk has been created as thin
> >> >> provisioning, the vdsm will extends once a watermark is reached.
> >> >> Usually it should not get to the state the Vm is paused.
> >> >> From the log, you can see that the request for extension has been
> sent
> >> >> before the VM got to the No Space Error.
> >> >> Later, we can see the VM resuming.
> >> >> INFO::2016-04-13
> >> >> 10:52:04,182::vm::1026::virt.vm::(extendDrivesIfNeeded)
> >> >> vmId=`f9cd282e-110a-4896-98d3-6d320662744d`::Requesting extension for
> >> >> volume
> >> >> 
> >> >> INFO::2016-04-13 10:52:29,360::vm::3728::virt.vm::(onIOError)
> >> >> vmId=`f9cd282e-110a-4896-98d3-6d320662744d`::abnormal vm stop device
> >> >> virtio-disk0 error enospc
> >> >> 
> >> >> INFO::2016-04-13
> 10:52:54,317::vm::5084::virt.vm::(_logGuestCpuStatus)
> >> >> vmId=`f9cd282e-110a-4896-98d3-6d320662744d`::CPU running: onResume
> >> >> Note that the extension is done on the SPM host, so it would be
> >> >> interesting to see the vdsm log from the host that was in SPM role at
> >> >> this timeframe.
> >> >> Regards,
> >> >> Fred
> >> >> On Wed, Apr 13, 2016 at 2:43 PM, Alex Crow 
> >> >> wrote:
> >> >>> Hi,
> >> >>> If you have set up VM disks as Thin Provisioned, the VM has to
> >> >>> pause when the disk image needs to expand. You won't see this on VMs
> >> >>> with preallocated storage.
> >> >>> It's not the SAN that's running out of space, it's the VM image
> >> >>> needing to be expanded incrementally each time.
> >> >>> Cheers
> >> >>> Alex
> >> >>> On 13/04/16 12:04, nico...@devels.es wrote:
> >> >>> Hi Fred,
> >> >>> This is an iSCSI storage. I'm attaching the VDSM logs from the host
> >> >>> where this machine has been running. Should you need any further
> >> >>> info, don't hesitate to ask.
> >> >>> Thanks.
> >> >>> Regards.
> >> >>> El 2016-04-13 11:54, Fred Rolland escribió:
> >> >>> Hi,
> >> >>> What kind of storage do you have ? (ISCSI,FC,NFS...)
> >> >>> Can you provide the vdsm logs from the host where this VM runs ?
> >> >>> Thanks,
> >> >>> Fred

Re: [ovirt-users] VM has been paused due to no Storage space error.

2016-04-14 Thread Nir Soffer
On Thu, Apr 14, 2016 at 12:02 PM, Fred Rolland  wrote:
> From the log, we can see that the lvextend command took 18 sec, which is
> quite long.

Fred, can you run repoplot on this log file? It may explain why this lvm
call took 18 seconds.

Nir

>
> 60decf0c-6d9a-4c3b-bee6-de9d2ff05e85::DEBUG::2016-04-13
> 10:52:06,759::lvm::290::Storage.Misc.excCmd::(cmd) /usr/bin/taskset
> --cpu-list 0-23 /usr/bin/sudo -n /usr/sbin/lvm lvextend --config ' devices {
> preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1
> write_cache_state=0 disable_after_error_count=3 filter = [
> '\''a|/dev/mapper/36000eb3a4f1acbc20043|'\'', '\''r|.*|'\'' ] }
> global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1
> use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } ' --autobackup
> n --size 6016m
> 5de4a000-a9c4-489c-8eee-10368647c413/721d09bc-60e7-4310-9ba2-522d2a4b03d0
> (cwd None)
> 
> 60decf0c-6d9a-4c3b-bee6-de9d2ff05e85::DEBUG::2016-04-13
> 10:52:22,217::lvm::290::Storage.Misc.excCmd::(cmd) SUCCESS:  = '
> WARNING: lvmetad is running but disabled. Restart lvmetad before enabling
> it!\n  WARNING: This metadata update is NOT backed up\n';  = 0
>
>
> The watermark can be configured by the following value:
>
> 'volume_utilization_percent', '50',
> 'Together with volume_utilization_chunk_mb, set the minimal free '
> 'space before a thin provisioned block volume is extended. Use '
> 'lower values to extend earlier.')
>
> On Thu, Apr 14, 2016 at 11:42 AM, Michal Skrivanek
>  wrote:
>>
>>
>> > On 14 Apr 2016, at 09:57, nico...@devels.es wrote:
>> >
>> > Ok, that makes sense, thanks for the insight both Alex and Fred. I'm
>> > attaching the VDSM log of the SPM node at the time of the pause. I couldn't
>> > find anything that would clearly identify the problem, but maybe you'll be
>> > able to.
>>
>> In extreme conditions it will happen. When your storage is slow to respond
>> to extension request, and when your write rate is very high then it may
>> happen, as it is happening to you, that you run out space sooner than the
>> extension finishes. You can change the watermark value I guess(right,
>> Fred?), but better would be to plan a bit more ahead and either use
>> preallocated or create thin and then allocate expected size in advance
>> before the operation causing it (typically it only happens during untarring
>> gigabytes of data, or huge database dump/restore)
>> Even then, the VM should always be automatially resumed once the disk
>> space is allocated
>>
>> Thanks,
>> michal
>>
>> >
>> > Thanks.
>> >
>> > Regards.
>> >
>> > El 2016-04-13 13:09, Fred Rolland escribió:
>> >> Hi,
>> >> Yes, just as Alex explained, if the disk has been created as thin
>> >> provisioning, the vdsm will extends once a watermark is reached.
>> >> Usually it should not get to the state the Vm is paused.
>> >> From the log, you can see that the request for extension has been sent
>> >> before the VM got to the No Space Error.
>> >> Later, we can see the VM resuming.
>> >> INFO::2016-04-13
>> >> 10:52:04,182::vm::1026::virt.vm::(extendDrivesIfNeeded)
>> >> vmId=`f9cd282e-110a-4896-98d3-6d320662744d`::Requesting extension for
>> >> volume
>> >> 
>> >> INFO::2016-04-13 10:52:29,360::vm::3728::virt.vm::(onIOError)
>> >> vmId=`f9cd282e-110a-4896-98d3-6d320662744d`::abnormal vm stop device
>> >> virtio-disk0 error enospc
>> >> 
>> >> INFO::2016-04-13 10:52:54,317::vm::5084::virt.vm::(_logGuestCpuStatus)
>> >> vmId=`f9cd282e-110a-4896-98d3-6d320662744d`::CPU running: onResume
>> >> Note that the extension is done on the SPM host, so it would be
>> >> interesting to see the vdsm log from the host that was in SPM role at
>> >> this timeframe.
>> >> Regards,
>> >> Fred
>> >> On Wed, Apr 13, 2016 at 2:43 PM, Alex Crow 
>> >> wrote:
>> >>> Hi,
>> >>> If you have set up VM disks as Thin Provisioned, the VM has to
>> >>> pause when the disk image needs to expand. You won't see this on VMs
>> >>> with preallocated storage.
>> >>> It's not the SAN that's running out of space, it's the VM image
>> >>> needing to be expanded incrementally each time.
>> >>> Cheers
>> >>> Alex
>> >>> On 13/04/16 12:04, nico...@devels.es wrote:
>> >>> Hi Fred,
>> >>> This is an iSCSI storage. I'm attaching the VDSM logs from the host
>> >>> where this machine has been running. Should you need any further
>> >>> info, don't hesitate to ask.
>> >>> Thanks.
>> >>> Regards.
>> >>> El 2016-04-13 11:54, Fred Rolland escribió:
>> >>> Hi,
>> >>> What kind of storage do you have ? (ISCSI,FC,NFS...)
>> >>> Can you provide the vdsm logs from the host where this VM runs ?
>> >>> Thanks,
>> >>> Freddy
>> >>> On Wed, Apr 13, 2016 at 1:02 PM,  wrote:
>> >>> Hi,
>> >>> We're running oVirt 3.6.4.1-1. Lately we're seeing a bunch of
>> >>> events like these:
>> >>> 2016-04-13 10:52:30,735 INFO
>> >>> [org.ovirt.engine.core.vdsbroker.VmAnalyzer]
>> >>> (DefaultQuartzScheduler_Worker-86) [60dea18f] VM
>> >>> 'f9cd282e-110a-4896-98d3-6d3206

Re: [ovirt-users] VM has been paused due to no Storage space error.

2016-04-14 Thread Fred Rolland
From the log, we can see that the lvextend command took 18 sec, which is
quite long.

60decf0c-6d9a-4c3b-bee6-de9d2ff05e85::DEBUG::2016-04-13
10:52:06,759::lvm::290::Storage.Misc.excCmd::(cmd) /usr/bin/taskset
--cpu-list 0-23 /usr/bin/sudo -n /usr/sbin/lvm lvextend --config ' devices
{ preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1
write_cache_state=0 disable_after_error_count=3 filter = [
'\''a|/dev/mapper/36000eb3a4f1acbc20043|'\'', '\''r|.*|'\'' ]
}  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1
use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } '
--autobackup n --size 6016m
5de4a000-a9c4-489c-8eee-10368647c413/721d09bc-60e7-4310-9ba2-522d2a4b03d0
(cwd None)

60decf0c-6d9a-4c3b-bee6-de9d2ff05e85::DEBUG::2016-04-13
10:52:22,217::lvm::290::Storage.Misc.excCmd::(cmd) SUCCESS:  = '
WARNING: lvmetad is running but disabled. Restart lvmetad before enabling
it!\n  WARNING: This metadata update is NOT backed up\n';  = 0


The watermark can be configured by the following value:

'volume_utilization_percent', '50',
'Together with volume_utilization_chunk_mb, set the minimal free '
'space before a thin provisioned block volume is extended. Use '
'lower values to extend earlier.')
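If extending earlier (or in bigger chunks) is needed, a sketch of overriding
those defaults on the hosts, assuming that, like vdsm's other storage tunables,
they are read from the [irs] section of /etc/vdsm/vdsm.conf:

cat >> /etc/vdsm/vdsm.conf <<'EOF'
[irs]
# lower than the default 50 => extend earlier
volume_utilization_percent = 25
# extend in bigger steps
volume_utilization_chunk_mb = 2048
EOF
systemctl restart vdsmd   # per host; pick a quiet moment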

On Thu, Apr 14, 2016 at 11:42 AM, Michal Skrivanek <
michal.skriva...@redhat.com> wrote:

>
> > On 14 Apr 2016, at 09:57, nico...@devels.es wrote:
> >
> > Ok, that makes sense, thanks for the insight both Alex and Fred. I'm
> attaching the VDSM log of the SPM node at the time of the pause. I couldn't
> find anything that would clearly identify the problem, but maybe you'll be
> able to.
>
> In extreme conditions it will happen. When your storage is slow to respond
> to extension request, and when your write rate is very high then it may
> happen, as it is happening to you, that you run out space sooner than the
> extension finishes. You can change the watermark value I guess(right,
> Fred?), but better would be to plan a bit more ahead and either use
> preallocated or create thin and then allocate expected size in advance
> before the operation causing it (typically it only happens during untarring
> gigabytes of data, or huge database dump/restore)
> Even then, the VM should always be automatially resumed once the disk
> space is allocated
>
> Thanks,
> michal
>
> >
> > Thanks.
> >
> > Regards.
> >
> > El 2016-04-13 13:09, Fred Rolland escribió:
> >> Hi,
> >> Yes, just as Alex explained, if the disk has been created as thin
> >> provisioning, the vdsm will extends once a watermark is reached.
> >> Usually it should not get to the state the Vm is paused.
> >> From the log, you can see that the request for extension has been sent
> >> before the VM got to the No Space Error.
> >> Later, we can see the VM resuming.
> >> INFO::2016-04-13
> >> 10:52:04,182::vm::1026::virt.vm::(extendDrivesIfNeeded)
> >> vmId=`f9cd282e-110a-4896-98d3-6d320662744d`::Requesting extension for
> >> volume
> >> 
> >> INFO::2016-04-13 10:52:29,360::vm::3728::virt.vm::(onIOError)
> >> vmId=`f9cd282e-110a-4896-98d3-6d320662744d`::abnormal vm stop device
> >> virtio-disk0 error enospc
> >> 
> >> INFO::2016-04-13 10:52:54,317::vm::5084::virt.vm::(_logGuestCpuStatus)
> >> vmId=`f9cd282e-110a-4896-98d3-6d320662744d`::CPU running: onResume
> >> Note that the extension is done on the SPM host, so it would be
> >> interesting to see the vdsm log from the host that was in SPM role at
> >> this timeframe.
> >> Regards,
> >> Fred
> >> On Wed, Apr 13, 2016 at 2:43 PM, Alex Crow 
> >> wrote:
> >>> Hi,
> >>> If you have set up VM disks as Thin Provisioned, the VM has to
> >>> pause when the disk image needs to expand. You won't see this on VMs
> >>> with preallocated storage.
> >>> It's not the SAN that's running out of space, it's the VM image
> >>> needing to be expanded incrementally each time.
> >>> Cheers
> >>> Alex
> >>> On 13/04/16 12:04, nico...@devels.es wrote:
> >>> Hi Fred,
> >>> This is an iSCSI storage. I'm attaching the VDSM logs from the host
> >>> where this machine has been running. Should you need any further
> >>> info, don't hesitate to ask.
> >>> Thanks.
> >>> Regards.
> >>> El 2016-04-13 11:54, Fred Rolland escribió:
> >>> Hi,
> >>> What kind of storage do you have ? (ISCSI,FC,NFS...)
> >>> Can you provide the vdsm logs from the host where this VM runs ?
> >>> Thanks,
> >>> Freddy
> >>> On Wed, Apr 13, 2016 at 1:02 PM,  wrote:
> >>> Hi,
> >>> We're running oVirt 3.6.4.1-1. Lately we're seeing a bunch of
> >>> events like these:
> >>> 2016-04-13 10:52:30,735 INFO
> >>> [org.ovirt.engine.core.vdsbroker.VmAnalyzer]
> >>> (DefaultQuartzScheduler_Worker-86) [60dea18f] VM
> >>> 'f9cd282e-110a-4896-98d3-6d320662744d'(vm.domain.com [1] [1]) moved
> >>> from
> >>> 'Up' --> 'Paused'
> >>> 2016-04-13 10:52:30,815 INFO
> >> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> >>> (DefaultQuartzScheduler_Worker-86) [60dea18f] Correlation ID: null,
> >>> Call Stack: 

Re: [ovirt-users] VM has been paused due to no Storage space error.

2016-04-14 Thread Michal Skrivanek

> On 14 Apr 2016, at 09:57, nico...@devels.es wrote:
> 
> Ok, that makes sense, thanks for the insight both Alex and Fred. I'm 
> attaching the VDSM log of the SPM node at the time of the pause. I couldn't 
> find anything that would clearly identify the problem, but maybe you'll be 
> able to.

In extreme conditions it will happen. When your storage is slow to respond to
the extension request, and your write rate is very high, it may happen, as it
is happening to you, that you run out of space sooner than the extension
finishes. You can change the watermark value I guess (right, Fred?), but it
would be better to plan a bit more ahead and either use preallocated disks or
create thin ones and then allocate the expected size in advance before the
operation causing it (typically it only happens during untarring gigabytes of
data, or a huge database dump/restore).
Even then, the VM should always be automatically resumed once the disk space
is allocated.

Thanks,
michal

> 
> Thanks.
> 
> Regards.
> 
> El 2016-04-13 13:09, Fred Rolland escribió:
>> Hi,
>> Yes, just as Alex explained, if the disk has been created as thin
>> provisioning, the vdsm will extends once a watermark is reached.
>> Usually it should not get to the state the Vm is paused.
>> From the log, you can see that the request for extension has been sent
>> before the VM got to the No Space Error.
>> Later, we can see the VM resuming.
>> INFO::2016-04-13
>> 10:52:04,182::vm::1026::virt.vm::(extendDrivesIfNeeded)
>> vmId=`f9cd282e-110a-4896-98d3-6d320662744d`::Requesting extension for
>> volume
>> 
>> INFO::2016-04-13 10:52:29,360::vm::3728::virt.vm::(onIOError)
>> vmId=`f9cd282e-110a-4896-98d3-6d320662744d`::abnormal vm stop device
>> virtio-disk0 error enospc
>> 
>> INFO::2016-04-13 10:52:54,317::vm::5084::virt.vm::(_logGuestCpuStatus)
>> vmId=`f9cd282e-110a-4896-98d3-6d320662744d`::CPU running: onResume
>> Note that the extension is done on the SPM host, so it would be
>> interesting to see the vdsm log from the host that was in SPM role at
>> this timeframe.
>> Regards,
>> Fred
>> On Wed, Apr 13, 2016 at 2:43 PM, Alex Crow 
>> wrote:
>>> Hi,
>>> If you have set up VM disks as Thin Provisioned, the VM has to
>>> pause when the disk image needs to expand. You won't see this on VMs
>>> with preallocated storage.
>>> It's not the SAN that's running out of space, it's the VM image
>>> needing to be expanded incrementally each time.
>>> Cheers
>>> Alex
>>> On 13/04/16 12:04, nico...@devels.es wrote:
>>> Hi Fred,
>>> This is an iSCSI storage. I'm attaching the VDSM logs from the host
>>> where this machine has been running. Should you need any further
>>> info, don't hesitate to ask.
>>> Thanks.
>>> Regards.
>>> El 2016-04-13 11:54, Fred Rolland escribió:
>>> Hi,
>>> What kind of storage do you have ? (ISCSI,FC,NFS...)
>>> Can you provide the vdsm logs from the host where this VM runs ?
>>> Thanks,
>>> Freddy
>>> On Wed, Apr 13, 2016 at 1:02 PM,  wrote:
>>> Hi,
>>> We're running oVirt 3.6.4.1-1. Lately we're seeing a bunch of
>>> events like these:
>>> 2016-04-13 10:52:30,735 INFO 
>>> [org.ovirt.engine.core.vdsbroker.VmAnalyzer]
>>> (DefaultQuartzScheduler_Worker-86) [60dea18f] VM
>>> 'f9cd282e-110a-4896-98d3-6d320662744d'(vm.domain.com [1] [1]) moved
>>> from
>>> 'Up' --> 'Paused'
>>> 2016-04-13 10:52:30,815 INFO 
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>> (DefaultQuartzScheduler_Worker-86) [60dea18f] Correlation ID: null,
>>> Call Stack: null, Custom Event ID: -1, Message: VM vm.domain.com
>>> [1] [1]
>>> has been paused.
>>> 2016-04-13 10:52:30,898 ERROR
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>> (DefaultQuartzScheduler_Worker-86) [60dea18f] Correlation ID: null,
>>> Call Stack: null, Custom Event ID: -1, Message: VM vm.domain.com
>>> [1] [1]
>>> has been paused due to no Storage space error.
>>> 2016-04-13 10:52:52,320 WARN 
>>> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData]
>>> (org.ovirt.thread.pool-8-thread-38) [] domain
>>> '5de4a000-a9c4-489c-8eee-10368647c413:iscsi01' in problem. vds:
>>> 'host6.domain.com [2] [2]'
>>> 2016-04-13 10:52:55,183 INFO 
>>> [org.ovirt.engine.core.vdsbroker.VmAnalyzer]
>>> (DefaultQuartzScheduler_Worker-70) [3da0f3d4] VM
>>> 'f9cd282e-110a-4896-98d3-6d320662744d'(vm.domain.com [1] [1]) moved
>>> from
>>> 'Paused' --> 'Up'
>>> 2016-04-13 10:52:55,318 INFO 
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>> (DefaultQuartzScheduler_Worker-70) [3da0f3d4] Correlation ID: null,
>>> Call Stack: null, Custom Event ID: -1, Message: VM vm.domain.com
>>> [1] [1]
>>> has recovered from paused back to up.
>>> The storage domain is far from being full, though (400+ G available
>>> right now). Could this be related to this other issue [1]? If not,
>>> how could I debug what's going on?
>>> Thanks.
>>>  [1]: https://www.mail-archive.com/users@ovirt.org/msg32079.html
>>> [3]
>>> [3]
>>> ___

Re: [ovirt-users] Ovirt VM to kvm image

2016-04-14 Thread Budur Nagaraju
Thank you very much! It's working :)

On Wed, Apr 13, 2016 at 9:05 PM, Nathanaël Blanchet 
wrote:

> No need to convert anything, a2ef36fa-ecfa-4138-8f19-2f7609276d4b is
> already the raw file you need. You can rsync it and rename it to myvm.img.
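
If you want to double-check the format, or convert anyway for plain libvirt/KVM
use, qemu-img should be able to do it; a minimal sketch, assuming the file is
indeed raw as stated above and using the volume name from this thread:

qemu-img info a2ef36fa-ecfa-4138-8f19-2f7609276d4b
# optional: raw -> qcow2, e.g. to get a sparse image for plain KVM
qemu-img convert -f raw -O qcow2 a2ef36fa-ecfa-4138-8f19-2f7609276d4b myvm.qcow2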
>
>
> Le 13/04/2016 17:28, Budur Nagaraju a écrit :
>
> I have exported the vm to the export_domain; below are the two files I
> found in the path
> "/var/lib/exports/export_domain/de23c906-bb57-4d78-9d50-041171b498f2/images/92fc9aa1-cad7-4562-b289-3795573cbb94"
>
> a2ef36fa-ecfa-4138-8f19-2f7609276d4b
>  a2ef36fa-ecfa-4138-8f19-2f7609276d4b.meta
>
> can we convert from these two files ?
>
> On Wed, Apr 13, 2016 at 8:40 PM, Nathanaël Blanchet < 
> blanc...@abes.fr> wrote:
>
>> On any host of the cluster, you will find the export domain mount point
>> under /rhev/data-center/mnt/MOUNT
>> Then, you can apply a succession of grep commands to find the IDs you need
>> from $MOUNT, for example (a consolidated sketch follows after these notes):
>>
>>- find the vm:
>>ID=$(grep $VMNAME -r master/vms/ | awk -F "/" '{ print $3 }')   # gives the vm id
>>- find the disk id:
>>DISK=$(grep 'fileRef="' master/vms/$ID/*.ovf | awk -F ' ' '{print $5}' | awk -F \" '{print $2}')   # gives the disk id
>>- copy the image anywhere you want:
>>rsync -av images/$DISK mount_point:/$VMNAME.img
>>
>> Note, you can do the same directly on any file-system-based storage domain,
>> but make sure the vm is down first.
>> If you want to do the same with a block storage domain, you may use dd
>> instead of rsync.
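
A minimal consolidated sketch of the steps above, assuming the usual
export-domain layout (master/vms/<vmid>/<vmid>.ovf and images/<diskid>/...);
the mount point, VM name and destination below are placeholders:

#!/bin/bash
# Sketch only: locate an exported VM's raw disk inside an oVirt export domain.
MOUNT=/rhev/data-center/mnt/EXPORT_MOUNT/SD_UUID   # placeholder path
VMNAME=myvm
DEST=/var/tmp
cd "$MOUNT" || exit 1
# vm id = directory under master/vms/ whose OVF mentions the VM name
ID=$(grep -rl "$VMNAME" master/vms/ | head -n1 | awk -F/ '{print $3}')
# disk (image group) id = part of the first fileRef attribute before the slash
DISK=$(grep -o 'fileRef="[^"]*"' "master/vms/$ID/$ID.ovf" | head -n1 | cut -d'"' -f2 | cut -d/ -f1)
echo "VM $VMNAME -> vm id $ID, disk id $DISK"
rsync -av "images/$DISK/" "$DEST/$VMNAME/"   # the raw volume file(s) land here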
>>
>>
>>
>>
>> Le 13/04/2016 16:27, Budur Nagaraju a écrit :
>>
>> I have exported the VM to the export domain, may I know the tools or commands
>> to convert?
>> On Apr 13, 2016 7:53 PM, "Nathanaël Blanchet"  wrote:
>>
>>> Yes, it is doable by searching the relative ids on the storage, but the
>>> simplest way to do such a thing is exporting your vm via the export
>>> domain; then the disk will be in raw format on the nfs share. Finally, you
>>> may manually redefine your vm properties to libvirt/kvm.
>>>
>>> Le 13/04/2016 14:00, Budur Nagaraju a écrit :
>>>
>>> Hi
>>>
>>> Is there anyways to convert ovirt VM to kvm .IMG VM ?
>>>
>>> Thanks,
>>> Nagaraju
>>>
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>> --
>>> Nathanaël Blanchet
>>>
>>> Supervision réseau
>>> Pôle Infrastrutures Informatiques
>>> 227 avenue Professeur-Jean-Louis-Viala
>>> 34193 MONTPELLIER CEDEX 5   
>>> Tél. 33 (0)4 67 54 84 55
>>> Fax  33 (0)4 67 54 84 14
>>> blanc...@abes.fr
>>>
>>>
>> --
>> Nathanaël Blanchet
>>
>> Supervision réseau
>> Pôle Infrastrutures Informatiques
>> 227 avenue Professeur-Jean-Louis-Viala
>> 34193 MONTPELLIER CEDEX 5
>> Tél. 33 (0)4 67 54 84 55
>> Fax  33 (0)4 67 54 84 14
>> blanc...@abes.fr
>>
>>
>
> --
> Nathanaël Blanchet
>
> Supervision réseau
> Pôle Infrastrutures Informatiques
> 227 avenue Professeur-Jean-Louis-Viala
> 34193 MONTPELLIER CEDEX 5 
> Tél. 33 (0)4 67 54 84 55
> Fax  33 (0)4 67 54 84 14
> blanc...@abes.fr
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Educational use case question

2016-04-14 Thread Gianluca Cecchi
On Thu, Apr 14, 2016 at 8:15 AM, Yedidyah Bar David  wrote:

> On Thu, Apr 14, 2016 at 5:18 AM, Michael Hall  wrote:
>
>
> 3. NFS
> loop-back mounting nfs is considered risky, due to potential locking
> issues. Therefore, if you want to use NFS, you are better off doing
> something like this:
>
>
Hello,
can you give more details about these potential locking issues, so that I
can reproduce them?
I have 2 little environments where I'm using this kind of setup. In one of
them the hypervisor is a physical server, in the other one the hypervisor
is itself a libvirt VM inside a Fedora 23 based laptop. oVirt version is
3.6.4 on both.

The test VM has 2 disks, sda and sdb; all oVirt-related stuff is on sdb.

My raw steps for the lab have been, after setting up the CentOS 7.2 OS,
disabling ipv6 and NetworkManager, setting SELinux to permissive and
enabling the oVirt repo:

NOTE: I also stop and disable firewalld

My host is ovc72.localdomain.local and name of my future engine
shengine.localdomain.local

yum -y update

yum install ovirt-hosted-engine-setup ovirt-engine-appliance

yum install rpcbind nfs-utils nfs-server
(some of them are probably already pulled in as dependencies by the previous
command)

When I start the system from scratch:

pvcreate /dev/sdb
vgcreate OVIRT_DOMAIN /dev/sdb
lvcreate -n ISO_DOMAIN -L 5G OVIRT_DOMAIN
lvcreate -n SHE_DOMAIN -L 25G OVIRT_DOMAIN
lvcreate -n NFS_DOMAIN -l +100%FREE OVIRT_DOMAIN
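
(Optional) a quick check that the VG and the three LVs came out as expected:

vgs OVIRT_DOMAIN
lvs OVIRT_DOMAIN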

If I only have to reinitialize, I start from here:
mkfs -t xfs -f /dev/mapper/OVIRT_DOMAIN-ISO_DOMAIN
mkfs -t xfs -f /dev/mapper/OVIRT_DOMAIN-NFS_DOMAIN
mkfs -t xfs -f /dev/mapper/OVIRT_DOMAIN-SHE_DOMAIN

mkdir /ISO_DOMAIN /NFS_DOMAIN /SHE_DOMAIN

/etc/fstab
/dev/mapper/OVIRT_DOMAIN-ISO_DOMAIN /ISO_DOMAIN xfs defaults 0 0
/dev/mapper/OVIRT_DOMAIN-NFS_DOMAIN /NFS_DOMAIN xfs defaults 0 0
/dev/mapper/OVIRT_DOMAIN-SHE_DOMAIN /SHE_DOMAIN xfs defaults 0 0

mount /ISO_DOMAIN/   --> this is for ISO images
mount /NFS_DOMAIN/   --> this is for the data storage domain where your VMs
will live (NFS based)
mount /SHE_DOMAIN/   --> this is for the HE VM

chown 36:36 /ISO_DOMAIN
chown 36:36 /NFS_DOMAIN
chown 36:36 /SHE_DOMAIN

chmod 0755 /ISO_DOMAIN
chmod 0755 /NFS_DOMAIN
chmod 0755 /SHE_DOMAIN

/etc/exports
/ISO_DOMAIN   *(rw,anonuid=36,anongid=36,all_squash)
/NFS_DOMAIN   *(rw,anonuid=36,anongid=36,all_squash)
/SHE_DOMAIN   *(rw,anonuid=36,anongid=36,all_squash)

systemctl enable rpcbind
systemctl start rpcbind

systemctl enable nfs-server
systemctl start nfs-server
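
A quick sanity check of the exports before deploying (showmount comes with
nfs-utils on CentOS 7):

exportfs -ra             # re-read /etc/exports
exportfs -v              # list what is exported, with options
showmount -e localhost   # the three domains should show up here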

hosted-engine --deploy

During setup I choose:

  Engine FQDN: shengine.localdomain.local

  Firewall manager   : iptables

  Storage connection :
ovc71.localdomain.local:/SHE_DOMAIN

  OVF archive (for disk boot):
/usr/share/ovirt-engine-appliance/ovirt-engine-appliance-20151015.0-1.el7.centos.ova

Also, I used the appliance provided by ovirt-engine-appliance package

After the install you have to add a dependency so that the HA broker and VDSM
start after the NFS server.

In /usr/lib/systemd/system/ovirt-ha-broker.service

Added in section  [Unit] the line:

After=nfs-server.service

Also in file vdsmd.service changed from:
After=multipathd.service libvirtd.service iscsid.service rpcbind.service \
  supervdsmd.service sanlock.service vdsm-network.service

to:
After=multipathd.service libvirtd.service iscsid.service rpcbind.service \
  supervdsmd.service sanlock.service vdsm-network.service \
  nfs-server.service

NOTE: these files will be overwritten by future updates, so keep that in
mind... (a more update-proof alternative using systemd drop-ins is sketched below)
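
A sketch of that alternative, using standard systemd drop-ins instead of
editing the packaged unit files (drop-in After= lines are additive, so the
existing ordering is kept):

mkdir -p /etc/systemd/system/ovirt-ha-broker.service.d \
         /etc/systemd/system/vdsmd.service.d
cat > /etc/systemd/system/ovirt-ha-broker.service.d/nfs-order.conf <<'EOF'
[Unit]
After=nfs-server.service
EOF
cp /etc/systemd/system/ovirt-ha-broker.service.d/nfs-order.conf \
   /etc/systemd/system/vdsmd.service.d/
systemctl daemon-reload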

On ovc72, in /etc/multipath.conf, right after the line
# VDSM REVISION 1.3

added
# RHEV PRIVATE

blacklist {
wwid 0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-1
wwid 0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-0
}

To exclude both internal drives... probably oVirt takes care of only the
first one?
Otherwise you get many messages like:
Jan 25 11:02:00 ovc72 kernel: device-mapper: table: 253:6: multipath: error
getting device
Jan 25 11:02:00 ovc72 kernel: device-mapper: ioctl: error adding target to
table

So far I haven't found any problems. The only little trick is when you have to
do full maintenance and power off the (only) hypervisor: you have to perform
the steps in the right order (a sketch follows below).
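
For reference, a rough sketch of that ordering; this is my reading of the usual
single-host hosted-engine flow, so double-check the hosted-engine options for
your version:

hosted-engine --set-maintenance --mode=global   # keep the HA agents from restarting the engine VM
hosted-engine --vm-shutdown                     # cleanly shut down the engine VM
hosted-engine --vm-status                       # repeat until the engine VM is reported as down
systemctl stop ovirt-ha-agent ovirt-ha-broker vdsmd
poweroff
# after the host is back up and nfs-server/vdsmd are running again:
hosted-engine --set-maintenance --mode=none     # the HA agents will start the engine VM again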

Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Educational use case question

2016-04-14 Thread Martin Sivak
Hi,

> But the project doesn't look ready to go and I can't find a download.

I think that is one of the unfortunate effects of how the website was
converted. Check the "At a glance" section; it says the status is
Released. We have had it released since oVirt 3.3, with significant
improvements in 3.4 and 3.6.

It is used in production worldwide now. That said... we have a
deployment-related bug in 3.6, but all should be perfectly fine if you
have just a single host.


Best regards

--
Martin Sivak
SLA / oVirt

On Thu, Apr 14, 2016 at 4:18 AM, Michael Hall  wrote:
> Thanks for the response.
>
> I did see that page and certainly agree with the point under "Benefit to
> oVirt" heading:
>
> "This operational mode will attract users already familiar with it from
> other virt platforms."
>
> I'm happy building headless servers using CLI over SSH, but my colleague and
> students aren't and need a "nice" point and click web interface which will
> display a usable VM desktop etc. My colleague is most familiar with VMware.
>
> But the project doesn't look ready to go and I can't find a download.
> Also, an implementation that isn't stable and fully functional will probably
> do more damage than good as far as open source's rep in our lab goes.
>
> I know this isn't a use case that oVirt or RedHat are really interested in,
> but I feel it is important to expose students to real world production
> software and systems as much as possible ... all we had to work with last
> year was VirtualBox running on Windows 7!
>
> Mike
>
> On Thu, Apr 14, 2016 at 11:37 AM, Yair Zaslavsky 
> wrote:
>>
>> As far as I remember, oVirt does come with an all-in-one configuration,
>> but it looks like it was deprecated in 3.6, so can you try out the self-hosted
>> engine?
>>
>>
>> https://www.ovirt.org/develop/release-management/features/engine/self-hosted-engine/
>>
>>
>>
>> 
>> From: "Michael Hall" 
>> To: users@ovirt.org
>> Sent: Thursday, 14 April, 2016 11:10:03 AM
>> Subject: [ovirt-users] Educational use case question
>>
>>
>> Hi
>>
>> I am teaching IT subjects in TAFE (a kind of post-secondary technical
>> college) in Australia.
>>
>> We are currently looking for a virtualisation platform that will allow
>> students to install and manage VMs via web interface.
>>
>> VMware is being proposed but I am trying to get KVM and the RedHat
>> ecosystem in the lab as much as possible.
>>
>> I have reasonable experience with running virt manager on CentOS 7, but
>> oVirt is new. I have it installed and running OK but am not sure how to
>> proceed with configuration.
>>
>> I basically want to run a single physical server which will be the KVM
>> host, the ISO and data store, and the home of oVirt engine ... in other
>> words a complete oVirt-managed KVM virtualisation platform running on one
>> physical machine (32GB RAM). It will only ever need to run a handful of VMs
>> with little or no real data or load. Is this possible/feasible?
>>
>> If possible/feasible, where should oVirt engine go ... on the host itself,
>> or into a VM guest?
>>
>> The web interface is what is making oVirt an attractive option at this
>> stage, as students will be working from Windows clients on a corporate
>> network. Do VM GUIs display well in the browser?
>>
>> Thanks for any advice
>>
>> Mike Hall
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM has been paused due to no Storage space error.

2016-04-14 Thread nicolas
Ok, that makes sense, thanks for the insight both Alex and Fred. I'm 
attaching the VDSM log of the SPM node at the time of the pause. I 
couldn't find anything that would clearly identify the problem, but 
maybe you'll be able to.


Thanks.

Regards.

El 2016-04-13 13:09, Fred Rolland escribió:

Hi,

Yes, just as Alex explained, if the disk has been created as thin
provisioning, VDSM will extend it once a watermark is reached.
Usually it should not get to the state where the VM is paused.

From the log, you can see that the request for extension has been sent
before the VM got to the No Space Error.
Later, we can see the VM resuming.

INFO::2016-04-13
10:52:04,182::vm::1026::virt.vm::(extendDrivesIfNeeded)
vmId=`f9cd282e-110a-4896-98d3-6d320662744d`::Requesting extension for
volume

INFO::2016-04-13 10:52:29,360::vm::3728::virt.vm::(onIOError)
vmId=`f9cd282e-110a-4896-98d3-6d320662744d`::abnormal vm stop device
virtio-disk0 error enospc

INFO::2016-04-13 10:52:54,317::vm::5084::virt.vm::(_logGuestCpuStatus)
vmId=`f9cd282e-110a-4896-98d3-6d320662744d`::CPU running: onResume

Note that the extension is done on the SPM host, so it would be
interesting to see the vdsm log from the host that was in SPM role at
this timeframe.
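
One way to pull the relevant lines out of the SPM host's log, just a grep over
the strings visible in the excerpts above (the path is the usual VDSM log
location, adjust if yours differs):

grep -E 'Requesting extension|extendDrivesIfNeeded|onIOError|enospc' /var/log/vdsm/vdsm.log*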

Regards,

Fred

On Wed, Apr 13, 2016 at 2:43 PM, Alex Crow 
wrote:


Hi,

If you have set up VM disks as Thin Provisioned, the VM has to
pause when the disk image needs to expand. You won't see this on VMs
with preallocated storage.

It's not the SAN that's running out of space, it's the VM image
needing to be expanded incrementally each time.

Cheers

Alex

On 13/04/16 12:04, nico...@devels.es wrote:
Hi Fred,

This is an iSCSI storage. I'm attaching the VDSM logs from the host
where this machine has been running. Should you need any further
info, don't hesitate to ask.

Thanks.

Regards.

El 2016-04-13 11:54, Fred Rolland escribió:
Hi,

What kind of storage do you have ? (ISCSI,FC,NFS...)
Can you provide the vdsm logs from the host where this VM runs ?

Thanks,

Freddy

On Wed, Apr 13, 2016 at 1:02 PM,  wrote:

Hi,

We're running oVirt 3.6.4.1-1. Lately we're seeing a bunch of
events like these:

2016-04-13 10:52:30,735 INFO 
[org.ovirt.engine.core.vdsbroker.VmAnalyzer]
(DefaultQuartzScheduler_Worker-86) [60dea18f] VM
'f9cd282e-110a-4896-98d3-6d320662744d'(vm.domain.com [1] [1]) moved
from
'Up' --> 'Paused'
2016-04-13 10:52:30,815 INFO 



[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]


(DefaultQuartzScheduler_Worker-86) [60dea18f] Correlation ID: null,

Call Stack: null, Custom Event ID: -1, Message: VM vm.domain.com
[1] [1]
has been paused.
2016-04-13 10:52:30,898 ERROR



[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]


(DefaultQuartzScheduler_Worker-86) [60dea18f] Correlation ID: null,

Call Stack: null, Custom Event ID: -1, Message: VM vm.domain.com
[1] [1]
has been paused due to no Storage space error.
2016-04-13 10:52:52,320 WARN 
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData]
(org.ovirt.thread.pool-8-thread-38) [] domain
'5de4a000-a9c4-489c-8eee-10368647c413:iscsi01' in problem. vds:
'host6.domain.com [2] [2]'
2016-04-13 10:52:55,183 INFO 
[org.ovirt.engine.core.vdsbroker.VmAnalyzer]
(DefaultQuartzScheduler_Worker-70) [3da0f3d4] VM
'f9cd282e-110a-4896-98d3-6d320662744d'(vm.domain.com [1] [1]) moved
from
'Paused' --> 'Up'
2016-04-13 10:52:55,318 INFO 



[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]


(DefaultQuartzScheduler_Worker-70) [3da0f3d4] Correlation ID: null,

Call Stack: null, Custom Event ID: -1, Message: VM vm.domain.com
[1] [1]
has recovered from paused back to up.

The storage domain is far from being full, though (400+ G available

right now). Could this be related to this other issue [1]? If not,
how could I debug what's going on?

Thanks.

 [1]: https://www.mail-archive.com/users@ovirt.org/msg32079.html
[3]
[3]
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users [4] [4]

Links:
--
[1] http://vm.domain.com [1]
[2] http://host6.domain.com [2]
[3] https://www.mail-archive.com/users@ovirt.org/msg32079.html [3]
[4] http://lists.ovirt.org/mailman/listinfo/users [4]


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users [4]
